Meta-Analytic-Predictive Priors for Variances

Sebastian Weber

2019-04-04

Applying the meta-analytic-predictive (MAP) prior approach to historical data on variances has been suggested in [1]. The utility is better informed planning of future trials with a normal endpoint, for which reliable information on the sampling standard deviation is crucial.

Under a normal sampling distribution the (standard) unbiased variance estimator for a sample \(y_j\) of size \(n_j\) is

\[ s^2_j = \frac{1}{n_j-1} \sum_{i=1}^{n_j} (y_{j,i} - \bar{y}_j)^2, \]

for which \(\nu_j \, s^2_j/\sigma^2_j\) follows a \(\chi^2_{\nu_j}\) distribution with \(\nu_j = n_j-1\) degrees of freedom. Equivalently, \(s^2_j\) follows a \(\Gamma\) distribution

\[ s^2_j|\nu_j,\sigma_j \sim \Gamma(s^2_j|\nu_j/2, \nu_j/(2\,\sigma^2_j)), \]

where \(\sigma_j\) is the (unknown) sampling standard deviation for the data \(y_j\).
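The \(\Gamma\) form follows from the scaling property of the \(\Gamma\) distribution: \(\nu_j\,s^2_j/\sigma^2_j \sim \chi^2_{\nu_j} = \Gamma(\nu_j/2,\,1/2)\), and multiplying a \(\Gamma(\alpha,\beta)\) variate by a constant \(c>0\) yields a \(\Gamma(\alpha,\beta/c)\) variate, so that

\[ s^2_j = \frac{\sigma^2_j}{\nu_j} \left(\frac{\nu_j\,s^2_j}{\sigma^2_j}\right) \sim \Gamma\!\left(\frac{\nu_j}{2},\, \frac{\nu_j}{2\,\sigma^2_j}\right). \]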

While this is not directly supported in RBesT, a normal approximation of the \(\log\) transformed \(\Gamma\) variate can be applied. When \(\log\) transforming a \(\Gamma(\alpha,\beta)\) variate, its mean and variance can be derived analytically (see [2], for example):

\[ E[\log(X)] = \psi(\alpha) - \log(\beta)\] \[ Var[\log(X)] = \psi^{(1)}(\alpha).\]

Here, \(\psi(x)\) is the digamma function (the first derivative of \(\log \Gamma(x)\)) and \(\psi^{(1)}(x)\) is the polygamma function of order 1 (the derivative of the digamma function).

Thus, by approximating the \(\log\) transformed \(\Gamma\) distribution with a normal distribution, we can apply gMAP as if we were using a normal endpoint. This approximation becomes more accurate the larger the degrees of freedom are. The section at the bottom of this vignette discusses the accuracy of this approximation and concludes that, independent of the true \(\sigma\) value, the approximation is useful from about 10 observations on and very good for more than 20 observations.

In the following we reanalyze the main example of reference [1] which is shown in table 2:

study    sd  df
    1 12.11 597
    2 10.97  60
    3 10.94 548
    4  9.41 307
    5 10.97 906
    6 10.95 903

Using the above equations (and plug-in estimates for \(\sigma_j\)), this translates into an approximate normal distribution for the \(\log\) variance:

library(dplyr)
## historical data as given in table 2 of [1]
hdata <- data.frame(study=1:6,
                    sd=c(12.11, 10.97, 10.94, 9.41, 10.97, 10.95),
                    df=c(597, 60, 548, 307, 906, 903))
hdata <- mutate(hdata,
                alpha=df/2,
                beta=alpha/sd^2,
                logvar_mean=digamma(alpha) - log(beta),
                logvar_var=psigamma(alpha,1))
study    sd  df alpha   beta logvar_mean logvar_var
    1 12.11 597 298.5 2.0354      4.9864     0.0034
    2 10.97  60  30.0 0.2493      4.7736     0.0339
    3 10.94 548 274.0 2.2894      4.7830     0.0037
    4  9.41 307 153.5 1.7335      4.4803     0.0065
    5 10.97 906 453.0 3.7643      4.7892     0.0022
    6 10.95 903 451.5 3.7656      4.7856     0.0022
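These plug-in values only involve the digamma and polygamma functions (digamma and psigamma in R), so they can be cross-checked independently. The following Python sketch implements both special functions via their standard recurrence plus asymptotic expansions and reproduces the first row of the table:

```python
import math

def digamma(x):
    """psi(x): shift the argument into x >= 10 by recurrence, then use the asymptotic expansion."""
    acc = 0.0
    while x < 10.0:
        acc -= 1.0 / x
        x += 1.0
    s = 1.0 / (x * x)
    return acc + math.log(x) - 0.5 / x - s * (1.0/12 - s * (1.0/120 - s / 252))

def trigamma(x):
    """psi'(x), the polygamma function of order 1, via the same recurrence/asymptotic scheme."""
    acc = 0.0
    while x < 10.0:
        acc += 1.0 / (x * x)
        x += 1.0
    s = 1.0 / (x * x)
    return acc + 1.0 / x + 0.5 * s + (s / x) * (1.0/6 - s * (1.0/30 - s / 42))

# first row of the table: sd = 12.11, df = 597
alpha = 597 / 2.0
beta = alpha / 12.11**2
logvar_mean = digamma(alpha) - math.log(beta)  # ~ 4.9864
logvar_var = trigamma(alpha)                   # ~ 0.0034
```

Identical values are returned by R's digamma and psigamma as used above.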

In order to run the MAP analysis, a prior for the heterogeneity parameter \(\tau\) and the intercept \(\beta\) is needed. In reference [3] it is demonstrated that the (approximate) sampling standard deviation of the \(\log\) variance is \(\sqrt{2}\). Thus, a HalfNormal(0, sqrt(2)/2) prior is a very conservative choice for the between-study heterogeneity parameter. A less conservative choice is HalfNormal(0, sqrt(2)/4), which gives very similar results in this case. For the intercept \(\beta\) a very wide prior with a standard deviation of \(100\) is used, in line with reference [1]:

map_mc <- gMAP(cbind(logvar_mean, sqrt(logvar_var)) ~ 1 | study, data=hdata,
               tau.dist="HalfNormal", tau.prior=sqrt(2)/2,
               beta.prior=cbind(4.8, 100))


map_mc
## Generalized Meta Analytic Predictive Prior Analysis
## 
## Call:  gMAP(formula = cbind(logvar_mean, sqrt(logvar_var)) ~ 1 | study, 
##     data = hdata, tau.dist = "HalfNormal", tau.prior = sqrt(2)/2, 
##     beta.prior = cbind(4.8, 100))
## 
## Exchangeability tau strata: 1 
## Prediction tau stratum    : 1 
## Maximal Rhat              : 1 
## 
## Between-trial heterogeneity of tau prediction stratum
##  mean    sd  2.5%   50% 97.5% 
## 0.203 0.103 0.076 0.180 0.472 
## 
## MAP Prior MCMC sample
##  mean    sd  2.5%   50% 97.5% 
## 4.770 0.248 4.270 4.770 5.240
summary(map_mc)
## Heterogeneity parameter tau per stratum:
##         mean    sd  2.5%  50% 97.5%
## tau[1] 0.203 0.103 0.076 0.18 0.472
## 
## Regression coefficients:
##             mean    sd 2.5%  50% 97.5%
## (Intercept) 4.77 0.101 4.58 4.77  4.97
## 
## Mean estimate MCMC sample:
##            mean    sd 2.5%  50% 97.5%
## theta_resp 4.77 0.101 4.58 4.77  4.97
## 
## MAP Prior MCMC sample:
##                 mean    sd 2.5%  50% 97.5%
## theta_resp_pred 4.77 0.248 4.27 4.77  5.24
plot(map_mc)$forest_model

In reference [1] the exact \(\Gamma\) likelihood is used, in contrast to the approximate normal approach above. Still, the results match very closely, even in the outer quantiles.

MAP prior for the sampling standard deviation

While the MAP analysis is performed on the \(\log\) variance scale, we are actually interested in the MAP prior of the respective sampling standard deviation. Since the sampling standard deviation is a strictly positive quantity, a mixture of \(\Gamma\) distributions is a suitable approximation of the MCMC sample of the MAP prior, which can be obtained with RBesT as:

map_mc_post <- as.matrix(map_mc)
sd_trans <- compose(sqrt, exp)  ## purrr::compose; maps a log variance to a standard deviation
mcmc_intervals(map_mc_post, regex_pars="theta", transformation=sd_trans)

map_sigma_mc <- sd_trans(map_mc_post[,c("theta_pred")])
map_sigma <- automixfit(map_sigma_mc, type="gamma")

plot(map_sigma)$mix

## 95% interval MAP for the sampling standard deviation
summary(map_sigma)
##      mean        sd      2.5%     50.0%     97.5% 
## 10.921731  1.379748  8.404987 10.855371 13.854232
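As a quick plausibility check of the fitted mixture, a single \(\Gamma\) component can be moment-matched to the summary above using shape \(a = m^2/v\) and rate \(b = m/v\). The Python sketch below does this with the mean and sd reported by summary(map_sigma); note that automixfit performs a full EM mixture fit, so this is only a rough single-component approximation:

```python
# moment-match a single Gamma(shape a, rate b) to mean m and standard deviation s
m, s = 10.921731, 1.379748  # values from summary(map_sigma) above
v = s ** 2
a = m ** 2 / v   # shape, ~ 62.7
b = m / v        # rate,  ~ 5.74
```

A Gamma(62.7, 5.74) density is concentrated around 11, consistent with the mixture fit and the 95% interval reported above.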

Normal approximation of a \(\log\Gamma\) variate

For a \(\Gamma(\alpha, \beta)\) variate \(y\) that is \(\log\) transformed, \(z = \log(y)\), the law of transformations for univariate densities gives:

\[ y|\alpha,\beta \sim \Gamma(y|\alpha,\beta) \]
\[ p_z(z) = p_y(y) \, \left|\frac{dy}{dz}\right| = p_y(\exp(z)) \, \exp(z) \]
\[ z|\alpha,\beta \sim \log\Gamma(z|\alpha,\beta) \Leftrightarrow p(z|\alpha,\beta) = \Gamma(\exp(z)|\alpha,\beta) \, \exp(z) \]

The mean and variance of \(z\) are then \[ E[z] = \psi(\alpha) - \log(\beta)\] \[ Var[z] = \psi^{(1)}(\alpha).\]
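These moments can also be verified numerically without any sampling. The Python sketch below (using only math.lgamma from the standard library) evaluates the \(\log\Gamma\) density on a grid for shape \(\alpha=18\) and rate \(\beta=6\), matching the mixgamma example below, and checks by trapezoidal integration that it integrates to one with mean \(\psi(\alpha) - \log(\beta) \approx 1.0706\):

```python
import math

def loggamma_pdf(z, a, b):
    """Density of z = log(y) for y ~ Gamma(shape a, rate b): p_y(exp(z)) * exp(z)."""
    y = math.exp(z)
    log_p = a * math.log(b) - math.lgamma(a) + (a - 1.0) * math.log(y) - b * y
    return math.exp(log_p) * y

a, b = 18.0, 6.0
lo, hi, n = -3.0, 4.0, 20000          # wide grid; the density is negligible outside
h = (hi - lo) / n
zs = [lo + i * h for i in range(n + 1)]
ps = [loggamma_pdf(z, a, b) for z in zs]
total  = h * (sum(ps) - 0.5 * (ps[0] + ps[-1]))            # ~ 1
mean_z = h * (sum(z * p for z, p in zip(zs, ps))
              - 0.5 * (zs[0] * ps[0] + zs[-1] * ps[-1]))   # ~ psi(18) - log(6)
```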

A short simulation demonstrates the above results:

gamma_dist <- mixgamma(c(1, 18, 6))

## logGamma density
dlogGamma <- function(z, a, b, log=FALSE) {
    n <- exp(z)
    if(!log) {
        return(dgamma(n, a, b) * n)
    } else {
        return(dgamma(n, a, b, log=TRUE) + z)
    }
}

a <- gamma_dist[2,1]
b <- gamma_dist[3,1]
m <- digamma(a) - log(b)
v <- psigamma(a,1)

## compare simulated histogram of log transformed Gamma variates to
## analytic density and approximate normal
sim <- rmix(gamma_dist, 1E5)
mcmc_hist(data.frame(logGamma=log(sim)), freq=FALSE, binwidth=0.1) +
    overlay_function(fun=dlogGamma, args=list(a=a,b=b), aes(linetype="LogGamma")) +
    overlay_function(fun=dnorm, args=list(mean=m, sd=sqrt(v)), aes(linetype="NormalApprox"))

We see that already for \(\nu=9\) the approximation with a normal density is reasonable. However, by comparing the \(2.5\)%, \(50\)% and \(97.5\)% quantiles of the correct distribution with those of the approximate distribution as a function of \(\nu\), we can assess the adequacy of the approximation. The respective R code is accessible via the vignette overview page, while here the graphical result is presented for two different \(\sigma\) values.

References

[1] Schmidli, H. et al., Computational Statistics and Data Analysis, 2017, 113:100-110
[2] https://en.wikipedia.org/wiki/Gamma_distribution#Logarithmic_expectation_and_variance
[3] Gelman, A. et al., Bayesian Data Analysis, 3rd edition, 2014, Chapter 4, p. 84

R Session Info

## R version 3.5.1 (2018-07-02)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Ubuntu 16.04.5 LTS
## 
## Matrix products: default
## BLAS: /usr/lib/libblas/libblas.so.3.6.0
## LAPACK: /usr/lib/lapack/liblapack.so.3.6.0
## 
## locale:
## [1] C
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] foreach_1.4.4   forcats_0.3.0   stringr_1.3.1   readr_1.3.0    
##  [5] tidyr_0.8.2     tibble_1.4.2    tidyverse_1.2.1 scales_1.0.0   
##  [9] bindrcpp_0.2.2  ggplot2_3.1.0   purrr_0.2.5     dplyr_0.7.8    
## [13] bayesplot_1.6.0 knitr_1.21      RBesT_1.3-8     Rcpp_1.0.0     
## 
## loaded via a namespace (and not attached):
##  [1] httr_1.4.0           jsonlite_1.6         modelr_0.1.2        
##  [4] StanHeaders_2.18.0-1 Formula_1.2-3        assertthat_0.2.0    
##  [7] highr_0.7            stats4_3.5.1         cellranger_1.1.0    
## [10] yaml_2.2.0           pillar_1.3.0         backports_1.1.3     
## [13] lattice_0.20-38      glue_1.3.0           digest_0.6.18       
## [16] checkmate_1.8.5      rvest_0.3.2          colorspace_1.3-2    
## [19] htmltools_0.3.6      plyr_1.8.4           pkgconfig_2.0.2     
## [22] rstan_2.18.2         broom_0.5.1          haven_2.0.0         
## [25] mvtnorm_1.0-8        processx_3.2.1       generics_0.0.2      
## [28] withr_2.1.2          lazyeval_0.2.1       cli_1.0.1           
## [31] magrittr_1.5         crayon_1.3.4         readxl_1.1.0        
## [34] evaluate_0.12        ps_1.2.1             nlme_3.1-137        
## [37] xml2_1.2.0           pkgbuild_1.0.2       tools_3.5.1         
## [40] loo_2.0.0            prettyunits_1.0.2    hms_0.4.2           
## [43] matrixStats_0.54.0   munsell_0.5.0        callr_3.1.0         
## [46] compiler_3.5.1       rlang_0.3.0.1        grid_3.5.1          
## [49] ggridges_0.5.1       iterators_1.0.10     rstudioapi_0.8      
## [52] labeling_0.3         rmarkdown_1.11       gtable_0.2.0        
## [55] codetools_0.2-15     inline_0.3.15        reshape2_1.4.3      
## [58] R6_2.3.0             gridExtra_2.3        lubridate_1.7.4     
## [61] bindr_0.1.1          stringi_1.2.4        parallel_3.5.1      
## [64] tidyselect_0.2.5     xfun_0.4