---
title: "Robust Bayesian Meta-Analysis"
author: "František Bartoš"
date: "4th of May 2026"
output:
  rmarkdown::html_vignette:
    self_contained: yes
bibliography: ../inst/REFERENCES.bib
csl: ../inst/apa.csl
link-citations: true
vignette: >
  %\VignetteIndexEntry{Robust Bayesian Meta-Analysis}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

```{r child = "_vignette-nowrap.md", echo = FALSE, eval = TRUE}
```

```{r setup, include = FALSE}
source("vignette-cache.R", local = knitr::knit_global())

cached_fits <- vignette_cache(
  name     = "v21-robust-bayesian-meta-analysis",
  objects  = c("fit_RoBMA", "fit_RoBMA_custom"),
  packages = c("metadat")
)

knitr::opts_chunk$set(
  collapse    = TRUE,
  comment     = "#>",
  eval        = vignette_cache_eval(cached_fits),
  message     = FALSE,
  warning     = FALSE,
  fig.width   = 7,
  fig.height  = 3.5,
  dev         = "png",
  fig.retina  = 3
)
if (.Platform$OS.type == "windows") {
  knitr::opts_chunk$set(dev.args = list(type = "cairo"))
}
```

```{r load-models, include = FALSE}
library("RoBMA")

data("dat.lehmann2018", package = "metadat")
dat <- dat.lehmann2018

vignette_cache_load(cached_fits)
```

```{r precompute-models, include = FALSE, eval = FALSE}
library("RoBMA")

data("dat.lehmann2018", package = "metadat")
dat <- dat.lehmann2018

fit_RoBMA <- RoBMA(
  yi = yi, vi = vi, measure = "SMD",
  data = dat, seed = 1
)

fit_RoBMA_custom <- RoBMA(
  yi = yi, vi = vi, measure = "SMD",
  prior_bias = list(
    prior_PET(
      distribution = "Cauchy",
      parameters   = list(location = 0, scale = 1),
      truncation   = list(lower = 0, upper = Inf)
    ),
    prior_PEESE(
      distribution = "Cauchy",
      parameters   = list(location = 0, scale = 5),
      truncation   = list(lower = 0, upper = Inf)
    ),
    prior_weightfunction(
      "one-sided",
      steps   = c(0.025),
      weights = wf_cumulative(c(1, 1))
    )
  ),
  data = dat, seed = 1
)

vignette_cache_save(cached_fits)
```

In the [*Publication-Bias Adjustment*](v11-metafor-parity-publication-bias.html) vignette, we fit PET, PEESE, and a weight-function selection model on the ego-depletion dataset and obtained three different bias-corrected effect size estimates.
The methods encoded different bias mechanisms: PET and PEESE corrected for the relationship between effect sizes and standard errors or sampling variances, and the selection model adjusted for publication bias operating on p-values.
There is no consensus on which of the three is the right adjustment to apply.
The [*Bayesian Model Averaging*](v20-bayesian-model-averaging.html) vignette highlighted how an analogous fixed-vs-random-effects question can be resolved by combining the candidate models in one ensemble and weighting them by their posterior model probabilities;
the same idea applies here.

Robust Bayesian model-averaged meta-analysis (RoBMA) is the application of that idea to publication-bias adjustment [@maier2020robust; @bartos2021no].
In a single fit, the default `RoBMA()` ensemble averages over the presence vs absence of an effect, the presence vs absence of heterogeneity, and, for publication bias, a no-bias spike against several weight-function and PET / PEESE models.
Inclusion Bayes factors quantify evidence for each of the three components, and the model-averaged estimates propagate uncertainty across the bias mechanisms instead of committing to a single one.

In this vignette we fit the default RoBMA ensemble on the same ego-depletion data used in the [*Publication-Bias Adjustment*](v11-metafor-parity-publication-bias.html) vignette, walk through the summary, the marginal posterior weights of the bias components, and the `interpret()` shortcut.
We list the available `model_type` presets and close with a custom ensemble that combines just the three single-model bias adjustments that vignette fit one at a time.

## Dataset

The example dataset is `dat.lehmann2018` from the `metadat` package: 81 standardized mean differences from the ego-depletion meta-analysis with effect sizes `yi` and sampling variances `vi` already computed.
For the matching single-model bias adjustments and `metafor` package parity calls, see the [*Publication-Bias Adjustment*](v11-metafor-parity-publication-bias.html) vignette.

```{r data, include = TRUE}
library("RoBMA")

data("dat.lehmann2018", package = "metadat")
dat <- dat.lehmann2018
head(dat[, c("Short_Title", "Year", "yi", "vi")])
```

## Default RoBMA Ensemble

`RoBMA()` with default arguments fits the RoBMA-PSMA ensemble [@bartos2021no]: spike-and-slab prior distributions on the effect and heterogeneity, and a 9-component bias mixture (no-bias spike, six weight functions, PET, PEESE).
The default prior distributions on `mu`, `tau`, and the bias parameters match the single-model fits in the [*Publication-Bias Adjustment*](v11-metafor-parity-publication-bias.html) vignette.

```{r fit-RoBMA, eval = FALSE}
fit_RoBMA <- RoBMA(
  yi = yi, vi = vi, measure = "SMD",
  data = dat, seed = 1
)
```
```{r fit-RoBMA-summary}
summary(fit_RoBMA)
```

The *Components* table reports inclusion Bayes factors for the three components.
*Model-averaged estimates* combine all 36 sub-models in the ensemble (effect $\times$ heterogeneity $\times$ bias) weighted by their posterior probabilities; we report the bias parameters only for the bias-adjusted sub-models.
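As a toy illustration of the weighting (made-up numbers, not values from this fit), a model-averaged posterior mean is simply the posterior-probability-weighted mixture of the sub-model means:

```{r toy-model-averaging, eval = TRUE}
# Hypothetical sub-model posterior means and posterior model probabilities;
# the model-averaged mean is their probability-weighted combination.
mu_hat    <- c(0.00, 0.12, 0.05)
post_prob <- c(0.60, 0.25, 0.15)
sum(post_prob * mu_hat)  # model-averaged mean: 0.0375
```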

`summary_models()` breaks the mixture prior distributions of the three components down into their individual sub-models and reports their prior weight, posterior weight, and individual inclusion Bayes factor.

```{r summary-models}
summary_models(fit_RoBMA)
```

The posterior model weights show which sub-models the data prefer within each component's mixture.
The bias side of the ensemble concentrates posterior mass on PET (posterior probability $0.77$, individual inclusion BF $\approx 23$) with a smaller share on PEESE; the weight functions and the no-bias spike all receive negligible posterior weight.
The `inclusion BF` column is the Bayes factor for that one sub-model against the rest of its mixture, so it does not need to agree with the component-level inclusion BF reported by `summary()`.
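An individual inclusion BF is the sub-model's posterior odds divided by its prior odds. A quick sanity check with the rounded PET values above, taking PET's prior weight as $0.125$ (half of the $0.25$ allotted jointly to PET / PEESE under the PSMA defaults):

```{r toy-inclusion-bf, eval = TRUE}
# Posterior odds / prior odds for the PET sub-model, using the rounded
# weights reported above (prior 0.125, posterior 0.77).
prior_prob <- 0.125
post_prob  <- 0.77
(post_prob / (1 - post_prob)) / (prior_prob / (1 - prior_prob))  # ~ 23.4
```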

`plot()` overlays the prior and the model-averaged posterior of any model parameter.
On `mu`, the spike at zero (right-axis arrow) carries the posterior probability that the effect is null and the slab carries the rest; the grey shapes show the corresponding prior distribution.

```{r plot-RoBMA-mu, fig.width = 6, fig.height = 4, out.width = "75%", fig.align = "center"}
par(mar = c(4, 4, 1, 4))
plot(fit_RoBMA, parameter = "mu", prior = TRUE, xlim = c(-1, 1), lwd = 2)
```

`interpret()` distills the components and the pooled estimates into prose.
With `scope = "all"`, the bias section is included.

```{r interpret-RoBMA}
interpret(fit_RoBMA, scope = "all")
```

## Preset Ensembles

`model_type` selects a preset bias-model ensemble.
Each preset combines the no-bias spike with a different alternative set; the prior weight on "any bias" is $0.5$ in every preset.

| `model_type` | Alternative bias models                                                              |
|:-------------|:-------------------------------------------------------------------------------------|
| `"PSMA"`     | 6 weight functions, PET, PEESE (the default RoBMA-PSMA ensemble [@bartos2021no])     |
| `"PP"`       | PET and PEESE only                                                                   |
| `"6w"`       | 6 weight functions only (no PET / PEESE)                                             |
| `"2w"`       | 2 two-sided weight functions only                                                    |
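For example, a fit restricted to the two two-sided weight functions would be requested as follows (a sketch, not evaluated here; `fit_RoBMA_2w` is a hypothetical object name):

```{r fit-2w, eval = FALSE}
# Sketch: the "2w" preset replaces the default 9-component bias mixture
# with the no-bias spike plus 2 two-sided weight functions.
fit_RoBMA_2w <- RoBMA(
  yi = yi, vi = vi, measure = "SMD",
  model_type = "2w",
  data = dat, seed = 1
)
```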

The PEESE scale is rescaled by $\text{UISD}_{\text{ratio}} = \text{UISD}_{\text{SMD}} / \text{UISD}_{\text{measure}}$ so the prior distribution describes a comparable bias on each effect-size scale (the ratio is $1$ for `measure = "SMD"`).
Bias-adjustment prior distributions are not affected by `rescale_priors`.
Use `print_prior(fit, parameter = "bias")` on a `RoBMA(..., only_priors = TRUE)` fit to inspect the alternative components and prior weights inside any preset:

```{r priors-PP}
fit_priors <- RoBMA(
  yi = yi, vi = vi, measure = "SMD",
  model_type = "PP",
  data = dat, only_priors = TRUE
)
print_prior(fit_priors, parameter = "bias")
```

## Custom Bias Ensembles

Beyond the presets, we can specify the bias model space directly via `prior_bias`.
This is useful when the publication process under study has a specific shape (for example, two-sided selection only or a single weight-function step at a non-default cutoff), or for sensitivity analyses against the preset choice.

The example below builds a 3-model bias ensemble from the three single-model bias adjustments in the [*Publication-Bias Adjustment*](v11-metafor-parity-publication-bias.html) vignette (the `bPET()`, `bPEESE()`, and `bselmodel()` defaults), so the ensemble averages over the same three bias mechanisms that vignette fit one at a time, plus the no-bias spike.

```{r fit-RoBMA-custom, eval = FALSE}
fit_RoBMA_custom <- RoBMA(
  yi = yi, vi = vi, measure = "SMD",
  prior_bias = list(
    prior_PET(
      distribution = "Cauchy",
      parameters   = list(location = 0, scale = 1),
      truncation   = list(lower = 0, upper = Inf)
    ),
    prior_PEESE(
      distribution = "Cauchy",
      parameters   = list(location = 0, scale = 5),
      truncation   = list(lower = 0, upper = Inf)
    ),
    prior_weightfunction(
      "one-sided",
      steps   = c(0.025),
      weights = wf_cumulative(c(1, 1))
    )
  ),
  data = dat, seed = 1
)
```
```{r interpret-RoBMA-custom}
interpret(fit_RoBMA_custom, scope = c("components", "estimates"))
```

The two ensembles agree on all three component inclusion conclusions: strong evidence for heterogeneity, moderate evidence against the effect, and strong evidence in favor of publication bias.
The model-averaged effect on `mu` is essentially zero in both fits with overlapping credible intervals.
The agreement reflects the bias-mixture composition reported by `summary_models()` above: the default ensemble already concentrates posterior weight on PET, so dropping the weight functions that received negligible weight does not move the model-averaged inference.

The prior probability of any bias is $0.75$ in the custom fit and $0.5$ in the default, because the three user-supplied alternative prior distributions all default to `prior_weights = 1` against the no-bias spike's weight of $1$.
The component inclusion Bayes factor is invariant to this asymmetry; pass `prior_weights` explicitly on each prior in `prior_bias` to match a different prior odds split.
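The arithmetic behind those prior probabilities is easy to check directly:

```{r toy-prior-weights, eval = TRUE}
# Three alternatives at prior weight 1 each against the no-bias spike's
# weight of 1 give P(any bias) = 3/4; weight 1/3 each restores 1/2.
w_alt <- c(PET = 1, PEESE = 1, selection = 1)
sum(w_alt) / (sum(w_alt) + 1)          # 0.75 in the custom fit
sum(w_alt / 3) / (sum(w_alt / 3) + 1)  # 0.5, matching the default split
```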

## Other Inference Helpers

`RoBMA()` returns an object of class `c("RoBMA", "brma")`, so all `summary()`, `plot()`, `predict()`, `funnel()`, `marginal_means()`, `rstudent()`, `qqnorm()`, and `influence()` methods documented in the [*Bayesian Meta-Analysis*](v02-bayesian-meta-analysis.html) vignette work the same way.
For single-model bias adjustment without averaging see the [*Publication-Bias Adjustment*](v11-metafor-parity-publication-bias.html) vignette; for the model-averaged workflow without publication-bias adjustment see [*Bayesian Model Averaging*](v20-bayesian-model-averaging.html); the prior distributions on `mu`, `tau`, and the moderators are documented in [*Prior Distributions*](v01-prior-distributions.html).

## References
