---
title: "Multilevel Robust Bayesian Model-Averaged Meta-Regression"
author: "František Bartoš"
date: "17th of December 2025 (updated: 29th of April 2026)"
output:
  rmarkdown::html_vignette:
    self_contained: yes
bibliography: ../inst/REFERENCES.bib
csl: ../inst/apa.csl
link-citations: true
vignette: >
  %\VignetteIndexEntry{Multilevel Robust Bayesian Model-Averaged Meta-Regression}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r child = "_vignette-nowrap.md", echo = FALSE, eval = TRUE}
```

```{r setup, include = FALSE}
source("vignette-cache.R", local = knitr::knit_global())

cached_fits <- vignette_cache(
  name     = "v33-robma-multilevel-metaregression",
  objects  = "fit_reg"
)

knitr::opts_chunk$set(
  collapse    = TRUE,
  comment     = "#>",
  eval        = vignette_cache_eval(cached_fits),
  message     = FALSE,
  warning     = FALSE,
  fig.width   = 7,
  fig.height  = 3.5,
  dev         = "png",
  fig.retina  = 3
)
if (.Platform$OS.type == "windows") {
  knitr::opts_chunk$set(dev.args = list(type = "cairo"))
}
```

```{r load-models, include = FALSE}
library("RoBMA")

data("Havrankova2025", package = "RoBMA")

unit_information_sd_361 <- estimate_unit_information_sd(
  sei = Havrankova2025$se,
  ni  = Havrankova2025$N
)
prior_scale_361 <- unit_information_sd_361 * 0.5

vignette_cache_load(cached_fits)
```

```{r precompute-models, include = FALSE, eval = FALSE}
library("RoBMA")

data("Havrankova2025", package = "RoBMA")

unit_information_sd_361 <- estimate_unit_information_sd(
  sei = Havrankova2025$se,
  ni  = Havrankova2025$N
)
prior_scale_361 <- unit_information_sd_361 * 0.5

prior_effect_361        <- prior("normal",   list(mean = 0, sd = prior_scale_361))
prior_heterogeneity_361 <- prior("invgamma", list(shape = 1, scale = 0.15 * prior_scale_361))
prior_mods_361          <- list(
  facing_customer = prior_factor("mnormal", list(mean = 0, sd = 0.25 * prior_scale_361), contrast = "meandif")
)
prior_bias_361          <- list(
  prior_weightfunction("two-sided", c(0.05),             wf_cumulative(c(1, 1)),       prior_weights = 1/12),
  prior_weightfunction("two-sided", c(0.05, 0.10),       wf_cumulative(c(1, 1, 1)),    prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.05),             wf_cumulative(c(1, 1)),       prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.025, 0.05),      wf_cumulative(c(1, 1, 1)),    prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.05, 0.5),        wf_cumulative(c(1, 1, 1)),    prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.025, 0.05, 0.5), wf_cumulative(c(1, 1, 1, 1)), prior_weights = 1/12),
  prior_PET("Cauchy", list(0, 1), list(0, Inf), 1/4),
  prior_PEESE("Cauchy", list(0, 5 / prior_scale_361), list(0, Inf), 1/4)
)

fit_reg <- RoBMA(
  yi = y, sei = se, ni = N, measure = "GEN",
  mods = ~ facing_customer,
  cluster = study_id,
  prior_unit_information_sd = unit_information_sd_361,
  prior_effect              = prior_effect_361,
  prior_heterogeneity       = prior_heterogeneity_361,
  prior_mods                = prior_mods_361,
  prior_bias                = prior_bias_361,
  sample = 20000, burnin = 10000, adapt = 10000,
  thin = 5, parallel = TRUE, seed = 1,
  data = Havrankova2025
)

vignette_cache_save(cached_fits)
```

**This vignette accompanies the manuscript [Robust Bayesian Multilevel Meta-Analysis: Adjusting for Publication Bias in the Presence of Dependent Effect Sizes](https://doi.org/10.31234/osf.io/9tgp2_v1) preprinted at *PsyArXiv* [@bartos2025robust].**

This vignette reproduces the second example from the manuscript, re-analyzing data from @havrankova2025beauty on the relationship between beauty and professional success (1,159 effect sizes from 67 studies).
We investigate whether the effect depends on the type of customer contact ("no", "some", or "direct").
For the first example describing a publication-bias-adjusted multilevel meta-analysis, see the [*Multilevel Robust Bayesian Meta-Analysis*](v32-robma-multilevel.html) vignette.

## Rescaling Prior Distributions for Non-Standardized Effect Sizes

The effect sizes in this dataset are not standardized: they are percent increases in earnings associated with a one-standard-deviation increase in beauty.
Therefore, the prior distributions need to be defined on this outcome scale.
Overly narrow prior distributions would make it difficult to find evidence for the effect, because the null and alternative models would make nearly identical predictions close to zero.
Overly wide prior distributions would bias the results towards the null hypothesis, because the alternative models would predict unrealistically large effects.
Version 4.0 of the `RoBMA` R package handles this automatically by estimating the unit-information standard deviation (UISD) and rescaling the prior distributions accordingly (see the [*Prior Distributions*](v01-prior-distributions.html) vignette).
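The UISD underlying this rescaling can also be computed directly from the standard errors and sample sizes via `estimate_unit_information_sd()`; a brief illustration (the prior scale used below corresponds to half of this value):

```r
# estimate the unit-information standard deviation from the
# reported standard errors and sample sizes
unit_information_sd <- estimate_unit_information_sd(
  sei = Havrankova2025$se,
  ni  = Havrankova2025$N
)
unit_information_sd
```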

To reproduce the RoBMA 3.6.1 results, which used a different prior-distribution formulation and scaling settings, we need to specify the prior distributions manually.[^scaling]

[^scaling]: This reproduces the RoBMA 3.6.1 vignette results. It does not match the current default UISD scaling conventions exactly; in particular, the current default moderator and PEESE scales differ and are therefore specified below.

```{r prior-specification}
prior_effect_361        <- prior("normal",   list(mean = 0, sd = 25.8))
prior_heterogeneity_361 <- prior("invgamma", list(shape = 1, scale = 0.15 * 25.8))
prior_mods_361          <- list(
  facing_customer = prior_factor("mnormal", list(mean = 0, sd = 0.25 * 25.8), contrast = "meandif")
)
prior_bias_361          <- list(
  prior_weightfunction("two-sided", c(0.05),             wf_cumulative(c(1, 1)),       prior_weights = 1/12),
  prior_weightfunction("two-sided", c(0.05, 0.10),       wf_cumulative(c(1, 1, 1)),    prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.05),             wf_cumulative(c(1, 1)),       prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.025, 0.05),      wf_cumulative(c(1, 1, 1)),    prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.05, 0.5),        wf_cumulative(c(1, 1, 1)),    prior_weights = 1/12),
  prior_weightfunction("one-sided", c(0.025, 0.05, 0.5), wf_cumulative(c(1, 1, 1, 1)), prior_weights = 1/12),
  prior_PET("Cauchy", list(0, 1), list(0, Inf), 1/4),
  prior_PEESE("Cauchy", list(0, 5 / 25.8), list(0, Inf), 1/4)
)
```

We fit the robust Bayesian multilevel meta-regression using the `RoBMA()` function.
We specify the effect sizes via `yi`, standard errors via `sei`, and the effect-size scale via the `measure = "GEN"` argument.
The moderator `facing_customer` is specified via the formula syntax using the `mods` argument.
The multilevel structure, i.e., effect sizes nested within studies `study_id`, is specified using the `cluster` argument.
Finally, we specify the RoBMA 3.6.1 equivalent prior distributions in the `prior_*` arguments (omitting them would result in a model fit using the current default prior distributions based on the UISD).

```{r fit-reg, eval = FALSE}
fit_reg <- RoBMA(
  yi = y, sei = se, measure = "GEN",
  mods = ~ facing_customer,
  cluster = study_id,
  prior_effect              = prior_effect_361,
  prior_heterogeneity       = prior_heterogeneity_361,
  prior_mods                = prior_mods_361,
  prior_bias                = prior_bias_361,
  sample = 20000, burnin = 10000, adapt = 10000,
  thin = 5, parallel = TRUE, seed = 1,
  data = Havrankova2025
)
```

Even with parallel processing enabled, the meta-regression can take more than an hour to run.
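For a quick exploratory run (not used for the results reported here), the MCMC settings can be reduced; a sketch assuming the same data and priors as above, with correspondingly less precise estimates and Bayes factors:

```r
# exploratory fit with reduced MCMC settings -- much faster, but less precise
fit_reg_quick <- RoBMA(
  yi = y, sei = se, measure = "GEN",
  mods = ~ facing_customer,
  cluster = study_id,
  prior_effect        = prior_effect_361,
  prior_heterogeneity = prior_heterogeneity_361,
  prior_mods          = prior_mods_361,
  prior_bias          = prior_bias_361,
  sample = 2000, burnin = 1000, adapt = 500,
  parallel = TRUE, seed = 1,
  data = Havrankova2025
)
```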

## Interpreting the Results

The `summary()` function reports inclusion Bayes factors and model parameters.

```{r summary}
summary(fit_reg, include_mcmc_diagnostics = FALSE)
```

The effect-size estimates are obtained with dedicated helpers: `pooled_effect()` for the average effect and `marginal_means()` for the moderator levels.

The multilevel RoBMA meta-regression finds extreme evidence for the presence of the average effect, moderation by the degree of customer contact, between-study heterogeneity, and publication bias (all BFs > 10,000).

```{r pooled-effect}
pooled_effect(fit_reg)
```

The model-averaged effect estimate is accompanied by considerable overall heterogeneity and a cluster-level allocation close to 1, indicating a high degree of similarity among estimates from the same study.
MCMC diagnostics are included in `summary()` by default, and chain-level checks are available via `plot_diagnostic_trace()`, `plot_diagnostic_density()`, and `plot_diagnostic_autocorrelation()`.

The average effect is moderated by the degree of customer contact. To examine the effect at each level of the moderator, we use the estimated marginal means.

```{r marginal-means}
mm_reg <- marginal_means(fit_reg)
summary(mm_reg)
```

The estimated marginal means reveal moderate evidence for the absence of an effect for jobs with no customer contact and extreme evidence for an effect for jobs with some or direct customer contact.

## Visual Fit Assessment

We visualize model fit using the meta-analytic zplot [@bartos2025zcurve]; see the [*Zplot Publication-Bias Diagnostics*](v36-zplot.html) vignette for more details.
The zplot shows the distribution of the observed $z$-statistics, computed as the effect size divided by its standard error, with dotted red vertical lines highlighting the typical thresholds on which publication bias operates.
These are $z = \pm 1.65$ and $z = \pm 1.96$, corresponding to marginally significant and statistically significant $p$-values, and $z = 0$, corresponding to selection on the direction of the effect.
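The observed $z$-statistics can also be computed directly from the data; a short sketch checking what proportion of estimates land just beyond the conventional significance threshold:

```r
# observed z-statistics: effect size divided by its standard error
z <- Havrankova2025$y / Havrankova2025$se

# proportion of positive, statistically significant estimates (z > 1.96)
mean(z > 1.96)
```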

```{r zplot, fig.width = 6, fig.height = 4}
par(mar = c(4, 4, 0, 0))
zplot(fit_reg, by.hist = 0.25, plot_extrapolation = FALSE, from = -4, to = 8)
```

Clear discontinuities are visible at the thresholds corresponding to selection on the direction of the effect and on marginal significance.
The posterior predictive density from the multilevel RoBMA meta-regression indicates that the model approximates the observed data reasonably well, capturing the discontinuities and the proportion of statistically significant results.
This supports the model fit even after adjusting for publication bias.

## References
