# Average treatment effect for competing risks data

The binreg function does direct binomial regression for one time-point, $$t$$, fitting the model \begin{align*} P(T \leq t, \epsilon=1 | X ) & = \mbox{expit}( X^T \beta), \end{align*} where the estimation is based on the IPCW weighted estimating equation \begin{align*} U(\beta) = & X \left( \Delta(t) I(T \leq t, \epsilon=1 )/G_c(T \wedge t) - \mbox{expit}( X^T \beta) \right) = 0 \end{align*} for the IPCW adjusted responses, with $$\Delta(t) = I( C > T \wedge t)$$ the indicator of being uncensored at $$T \wedge t$$ and $$G_c$$ the censoring survival distribution.

The function logitIPCW instead considers \begin{align*} U(\beta) = & X \frac{\Delta(t)}{G_c(T \wedge t)} \left( I(T \leq t, \epsilon=1 ) - \mbox{expit}( X^T \beta) \right) = 0. \end{align*} The two score equations are quite similar, and exactly the same when the censoring model is fully nonparametric given $$X$$.
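As a small base-R sketch of the idea behind the IPCW adjusted response (toy simulated data with a single cause and a known exponential censoring distribution; all names below are hypothetical, not part of mets):

```r
## Toy sketch: the IPCW-adjusted response Delta(t) I(T <= t) / G_c(T ^ t)
## is (approximately) unbiased for P(T <= t) under independent censoring.
set.seed(1)
n <- 1000
T1 <- rexp(n, 0.2)                   # event times (one cause only)
C  <- rexp(n, 0.1)                   # independent censoring times
time  <- pmin(T1, C)                 # observed time
delta <- as.numeric(T1 <= C)         # uncensored indicator
t0 <- 5
Gc <- function(s) exp(-0.1 * s)      # known censoring survival G_c(s) = P(C > s)
Dt <- as.numeric(delta == 1 | time > t0)       # Delta(t0) = I(C > T ^ t0)
Y  <- Dt * (time <= t0) / Gc(pmin(time, t0))   # IPCW-adjusted response
c(ipcw = mean(Y), true = 1 - exp(-0.2 * t0))
```

Plugging such responses into a logistic score gives the binreg equation; logitIPCW instead weights the whole residual by $$\Delta(t)/G_c(T \wedge t)$$.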

• It seems that the binreg estimating equations are most often the preferable ones to use.

The additional functions logitATE and binregATE compute the average treatment effect. We demonstrate their use below.

The functions binregATE (recommended) and logitATE also work when there is no censoring and we thus have a simple binary outcome.

The variance is based on the sandwich formula with IPCW adjustment, and naive.var is the variance under a known censoring model. The influence functions are stored in the output.

• We estimate the average treatment effect of our binary response $$I(T \leq t, \epsilon=1)$$
• Using a working logistic model for the response
• Using a working logistic model for treatment given covariates
• The binregATE can also handle a factor with more than two levels and then uses the mlogit multinomial regression function (of mets).
• Using a working model for censoring given covariates, here a stratified Kaplan-Meier.

If there is no censoring then the censoring weights are simply set to 1.

The average treatment effect is \begin{align*} E(Y(1) - Y(0)) \end{align*} using counterfactual outcomes.

We compute the simple G-estimator \begin{align*} n^{-1} \sum_i m_a(X_i) \end{align*} to estimate the risk $$E(Y(a))$$.
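For a simple uncensored binary outcome, the G-estimator can be sketched in a few lines of base R (simulated toy data; variable names are made up for illustration, this is not the mets implementation):

```r
## G-computation sketch: fit the outcome model, predict everyone's risk
## under a=1 and a=0, and average the predictions.
set.seed(2)
n <- 2000
X <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * X))            # treatment depends on X
Y <- rbinom(n, 1, plogis(-1 + A + 0.5 * X))   # outcome depends on A and X
fit <- glm(Y ~ A + X, family = binomial)      # working outcome model m_a(X)
m1 <- predict(fit, newdata = data.frame(A = 1, X = X), type = "response")
m0 <- predict(fit, newdata = data.frame(A = 0, X = X), type = "response")
c(EY1 = mean(m1), EY0 = mean(m0), ATE = mean(m1) - mean(m0))
```

Note that the naive contrast mean(Y[A==1]) - mean(Y[A==0]) would be confounded by X here, while averaging the model predictions marginalizes over the observed covariate distribution.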

The DR-estimator instead uses estimating equations that are doubly robust with respect to

• A working logistic model for the response
• A working logistic model for treatment given covariates

This is estimated using the estimator \begin{align*} n^{-1} \sum_i \left[ \frac{A_i Y_i}{\pi_A(X_i)}-\frac{A_i - \pi_A(X_i)}{\pi_A(X_i)} m_1(X_i) \right] - \left[ \frac{(1-A_i) Y_i}{1-\pi_A(X_i)}+\frac{A_i - \pi_A(X_i)}{1-\pi_A(X_i)} m_0(X_i) \right] \end{align*} where

• $$A_i$$ is the treatment indicator
• $$\pi_A(X_i) = P(A_i=1|X_i)$$ is the treatment model
• $$Y_i$$ is the outcome, which in the case of censoring is the censoring-adjusted response $$\tilde Y_i \Delta(t) /G_c(T_i \wedge t)$$
• $$\tilde Y_i = I(T_i \leq t, \epsilon_i=1)$$ is the outcome before censoring.
• $$m_j(X_i)=P(Y_i=1| A_i=j,X_i)$$ is the outcome model, estimated using binomial regression.
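The display above can be sketched directly in base R for an uncensored binary outcome (so the censoring weights are 1); the data and names below are simulated and hypothetical, not the mets code:

```r
## AIPW / double robust sketch following the estimator above.
set.seed(3)
n <- 2000
X <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * X))
Y <- rbinom(n, 1, plogis(-1 + A + 0.5 * X))
piA <- predict(glm(A ~ X, family = binomial), type = "response")  # pi_A(X)
out <- glm(Y ~ A + X, family = binomial)                          # outcome model
m1  <- predict(out, newdata = data.frame(A = 1, X = X), type = "response")
m0  <- predict(out, newdata = data.frame(A = 0, X = X), type = "response")
EY1 <- mean(A * Y / piA - (A - piA) / piA * m1)                     # E(Y(1))
EY0 <- mean((1 - A) * Y / (1 - piA) + (A - piA) / (1 - piA) * m0)   # E(Y(0))
c(EY1 = EY1, EY0 = EY0, ATE = EY1 - EY0)
```

The augmentation terms involving $$m_1$$ and $$m_0$$ are what make the estimator consistent when either the treatment model or the outcome model is correctly specified.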

The standard errors are then based on an iid decomposition using a Taylor expansion in the parameters of the treatment model, the outcome model, and the censoring probability.

This requires that the censoring model is correctly specified, so it can be important to use a sufficiently large censoring model, as we also illustrate below.

• The censoring model is specified by strata, as used for phreg

We also compute the standard marginalization for the average treatment effect (called differenceG) \begin{align*} n^{-1} \sum_i \left[ m_1(X_i) - m_0(X_i) \right] \end{align*} and again the standard errors are based on the related influence functions and are also returned.

For large data with more than two treatment groups the computations can be memory intensive when there are many covariates, due to the multinomial regression model used for the propensity scores. Otherwise the function (binregATE) will run for rather large data and be rather quick.

# Average treatment effect

First we simulate some data that mimic those of Kumar et al. (2012). These are data on multiple myeloma patients treated with allogeneic stem cell transplantation, from the Center for International Blood and Marrow Transplant Research (CIBMTR): Kumar et al. (2012), “Trends in allogeneic stem cell transplantation for multiple myeloma: a CIBMTR analysis”. The data used in this paper consist of patients transplanted from 1995 to 2005, and the outcomes were compared between transplant periods: 2001-2005 (N=488) versus 1995-2000 (N=375). The two competing events were relapse (cause 2) and treatment-related mortality (TRM, cause 1), defined as death without relapse. The following risk covariates were considered: transplant time period (gp, the main interest of the study: 1 for transplanted in 2001-2005 versus 0 for transplanted in 1995-2000), donor type (dnr: 1 for unrelated or other related donor (N=280) versus 0 for HLA-identical sibling (N=584)), prior autologous transplant (preauto: 1 for Auto+Allo transplant (N=399) versus 0 for allogeneic transplant alone (N=465)), and time to transplant (ttt24: 1 for more than 24 months (N=289) versus 0 for 24 months or less (N=575)).

We here generate similar data by assuming that the two cumulative incidence curves are logistic and that the censoring depends on the covariates via a Cox model. All this is wrapped in the kumarsim function. The simulation does not deal with possible violations of the bound $$F_1+F_2 < 1$$, but as we increase the sample size we still see that we recover the parameters of cause 2.

library(mets)
set.seed(100)

n <- 400
kumar <- kumarsim(n,depcens=1)
kumar$cause <- kumar$status
kumar$ttt24 <- kumar[,6]

dtable(kumar,~cause)
#>
#> cause
#>   0   1   2
#> 122 126 152

dfactor(kumar) <- gp.f~gp

### censoring model must be adapted to size of data
###c2 <- binregATE(Event(time,cause)~gp.f+dnr+preauto+ttt24,kumar,cause=2,
###     treat.model=gp.f~dnr+preauto+ttt24,time=40,cens.model=~strata(gp,dnr,preauto,ttt24))
###summary(c2)
c2 <- binregATE(Event(time,cause)~gp.f+dnr+preauto+ttt24,kumar,cause=2,
     treat.model=gp.f~dnr+preauto+ttt24,time=40,cens.model=~strata(gp,dnr))
summary(c2)
#>
#>    n events
#>  400    144
#>
#>  400 clusters
#> coeffients:
#>             Estimate  Std.Err     2.5%    97.5% P-value
#> (Intercept) -1.05194  0.21327 -1.46993 -0.63395  0.0000
#> gp.f1        0.74736  0.26024  0.23730  1.25741  0.0041
#> dnr          0.24186  0.26393 -0.27544  0.75916  0.3595
#> preauto      0.37081  0.29377 -0.20497  0.94659  0.2069
#> ttt24        0.20367  0.29270 -0.37000  0.77734  0.4865
#>
#> exp(coeffients):
#>             Estimate    2.5%  97.5%
#> (Intercept)  0.34926 0.22994 0.5305
#> gp.f1        2.11141 1.26782 3.5163
#> dnr          1.27362 0.75924 2.1365
#> preauto      1.44891 0.81468 2.5769
#> ttt24        1.22589 0.69073 2.1757
#>
#> Average Treatment effects (G-formula) :
#>            Estimate   Std.Err      2.5%     97.5% P-value
#> treat0     0.328953  0.040459  0.249654  0.408251  0.0000
#> treat1     0.504878  0.041575  0.423393  0.586362  0.0000
#> treat:1-0  0.175925  0.060798  0.056764  0.295087  0.0038
#>
#> Average Treatment effects (double robust) :
#>            Estimate   Std.Err      2.5%     97.5% P-value
#> treat0     0.312621  0.040994  0.232274  0.392967  0.0000
#> treat1     0.496317  0.043169  0.411707  0.580926  0.0000
#> treat:1-0  0.183696  0.059693  0.066700  0.300692  0.0021

c1 <- binregATE(Event(time,cause)~gp.f+dnr+preauto+ttt24,kumar,cause=2,
     treat.model=gp.f~dnr+preauto+ttt24,time=60)
summary(c1)
#>
#>    n events
#>  400    150
#>
#>  400 clusters
#> coeffients:
#>              Estimate   Std.Err      2.5%     97.5% P-value
#> (Intercept) -0.657017  0.234686 -1.116993 -0.197040  0.0051
#> gp.f1        0.102286  0.293157 -0.472291  0.676863  0.7272
#> dnr          0.574571  0.319460 -0.051558  1.200701  0.0721
#> preauto      0.164113  0.302361 -0.428504  0.756730  0.5873
#> ttt24        0.596183  0.326051 -0.042864  1.235230  0.0675
#>
#> exp(coeffients):
#>             Estimate    2.5%  97.5%
#> (Intercept)  0.51840 0.32726 0.8212
#> gp.f1        1.10770 0.62357 1.9677
#> dnr          1.77637 0.94975 3.3224
#> preauto      1.17835 0.65148 2.1313
#> ttt24        1.81518 0.95804 3.4392
#>
#> Average Treatment effects (G-formula) :
#>            Estimate   Std.Err      2.5%     97.5% P-value
#> treat0     0.452073  0.052419  0.349333  0.554813  0.0000
#> treat1     0.476115  0.038011  0.401615  0.550615  0.0000
#> treat:1-0  0.024042  0.068950 -0.111097  0.159181  0.7273
#>
#> Average Treatment effects (double robust) :
#>            Estimate   Std.Err      2.5%     97.5% P-value
#> treat0     0.432878  0.053526  0.327969  0.537787  0.0000
#> treat1     0.467654  0.040089  0.389081  0.546228  0.0000
#> treat:1-0  0.034777  0.068783 -0.100035  0.169589  0.6131

kumar$int <- interaction(kumar$gp,kumar$dnr)

b5 <- binregATE(Event(time,cause)~int+preauto+ttt24,kumar,cause=2,
treat.model=int~preauto+ttt24,cens.code=0,time=60)
summary(b5)
#>
#>    n events
#>  400    150
#>
#>  400 clusters
#> coeffients:
#>             Estimate  Std.Err     2.5%    97.5% P-value
#> (Intercept) -0.72864  0.26508 -1.24819 -0.20909  0.0060
#> int1.0       0.20908  0.32235 -0.42272  0.84088  0.5166
#> int0.1       0.77385  0.52369 -0.25256  1.80027  0.1395
#> int1.1       0.62077  0.42765 -0.21741  1.45896  0.1466
#> preauto      0.16950  0.30222 -0.42285  0.76184  0.5749
#> ttt24        0.62752  0.33149 -0.02218  1.27722  0.0584
#>
#> exp(coeffients):
#>             Estimate    2.5%  97.5%
#> (Intercept)  0.48256 0.28702 0.8113
#> int1.0       1.23254 0.65526 2.3184
#> int0.1       2.16811 0.77681 6.0513
#> int1.1       1.86037 0.80460 4.3015
#> preauto      1.18471 0.65518 2.1422
#> ttt24        1.87296 0.97806 3.5867
#>
#> Average Treatment effects (G-formula) :
#>                Estimate   Std.Err      2.5%     97.5% P-value
#> treat0.0       0.396357  0.058181  0.282324  0.510390  0.0000
#> treat1.0       0.445884  0.046086  0.355558  0.536210  0.0000
#> treat0.1       0.582308  0.110770  0.365204  0.799412  0.0000
#> treat1.1       0.545665  0.079180  0.390474  0.700856  0.0000
#> treat:1.0-0.0  0.049527  0.075967 -0.099367  0.198420  0.5144
#> treat:0.1-0.0  0.185951  0.123715 -0.056526  0.428428  0.1328
#> treat:1.1-0.0  0.149308  0.103236 -0.053030  0.351646  0.1481
#> treat:0.1-1.0  0.136424  0.123217 -0.105078  0.377926  0.2682
#> treat:1.1-1.0  0.099781  0.095044 -0.086501  0.286063  0.2938
#> treat:1.1-0.1 -0.036643  0.139744 -0.310536  0.237249  0.7932
#>
#> Average Treatment effects (double robust) :
#>                Estimate   Std.Err      2.5%     97.5% P-value
#> treat0.0       0.343660  0.056787  0.232359  0.454960  0.0000
#> treat1.0       0.463458  0.059948  0.345962  0.580954  0.0000
#> treat0.1       0.622022  0.226104  0.178866  1.065178  0.0059
#> treat1.1       0.563471  0.180951  0.208813  0.918129  0.0018
#> treat:1.0-0.0  0.119798  0.083150 -0.043174  0.282770  0.1497
#> treat:0.1-0.0  0.278362  0.233179 -0.178661  0.735384  0.2326
#> treat:1.1-0.0  0.219811  0.190384 -0.153334  0.592957  0.2483
#> treat:0.1-1.0  0.158564  0.247795 -0.327106  0.644233  0.5222
#> treat:1.1-1.0  0.100013  0.199201 -0.290413  0.490440  0.6156
#> treat:1.1-0.1 -0.058550  0.254591 -0.557539  0.440438  0.8181

We note that the correct estimates, found using the larger censoring model, are very different from those based on the simple Kaplan-Meier weights, which are severely biased for these data. This is due to a strong censoring dependence.

The average treatment effect is around $$0.17 = E(Y(1) - Y(0))$$ at time 40 for the transplant period, under the standard causal assumptions. We here use a logistic model for the outcome and a treatment model that is also logistic. The 1/0 variable used for the causal computation is the response (left-hand side) of the treat.model.

# Average treatment effect for binary or continuous responses

In the binary case a binary outcome is specified instead of the survival outcome, and consequently no censoring adjustment is done.

• the binary/numeric outcome must be a variable in the data frame

First a binary outcome (one can also use binregATE, coding the cause so there are no censoring values, that is, setting cens.code=2 and time large)

kumar$cause2 <- 1*(kumar$cause==2)

b3 <- logitATE(cause2~gp.f+dnr+preauto+ttt24,kumar,treat.model=gp.f~dnr+preauto+ttt24)
summary(b3)
#>
#>    n events
#>  400    400
#>
#>  400 clusters
#> coeffients:
#>                Estimate     Std.Err        2.5%       97.5% P-value
#> (Intercept) -1.07282963  0.19388190 -1.45283117 -0.69282808  0.0000
#> gp.f1        0.26229718  0.23212023 -0.19265012  0.71724448  0.2585
#> dnr          0.47162640  0.24030425  0.00063871  0.94261408  0.0497
#> preauto      0.26934293  0.24156220 -0.20411028  0.74279614  0.2648
#> ttt24        0.40066829  0.24437556 -0.07829902  0.87963559  0.1011
#>
#> exp(coeffients):
#>             Estimate    2.5%  97.5%
#> (Intercept)  0.34204 0.23391 0.5002
#> gp.f1        1.29991 0.82477 2.0488
#> dnr          1.60260 1.00064 2.5667
#> preauto      1.30910 0.81537 2.1018
#> ttt24        1.49282 0.92469 2.4100
#>
#> Average Treatment effects (G-formula) :
#>            Estimate   Std.Err      2.5%     97.5% P-value
#> treat0     0.345704  0.037959  0.271306  0.420102  0.0000
#> treat1     0.404802  0.033367  0.339404  0.470200  0.0000
#> treat:1-0  0.059098  0.052111 -0.043037  0.161232  0.2568
#>
#> Average Treatment effects (double robust) :
#>            Estimate   Std.Err      2.5%     97.5% P-value
#> treat0     0.331726  0.039466  0.254375  0.409078  0.0000
#> treat1     0.398074  0.034407  0.330637  0.465511  0.0000
#> treat:1-0  0.066348  0.052260 -0.036080  0.168775  0.2042

## calculate also relative risk
estimate(coef=b3$riskDR,vcov=b3$var.riskDR,f=function(p) p[1]/p[2])
#>        Estimate Std.Err   2.5% 97.5%   P-value
#> treat0   0.8333  0.1223 0.5936 1.073 9.589e-12

Or with a continuous response, using normal estimating equations

b3 <- normalATE(time~gp.f+dnr+preauto+ttt24,kumar,treat.model=gp.f~dnr+preauto+ttt24)
summary(b3)
#>
#>    n events
#>  400    400
#>
#>  400 clusters
#> coeffients:
#>              Estimate   Std.Err      2.5%     97.5% P-value
#> (Intercept)  27.80015   3.30304  21.32631  34.27399  0.0000
#> gp.f1       -16.06100   3.25515 -22.44098  -9.68102  0.0000
#> dnr          -6.51649   3.16550 -12.72075  -0.31223  0.0395
#> preauto       1.66745   2.93033  -4.07590   7.41080  0.5693
#> ttt24        -1.12151   3.27807  -7.54640   5.30338  0.7323
#>
#> exp(coeffients):
#>               Estimate       2.5%      97.5%
#> (Intercept) 1.1843e+12 1.8277e+09 7.6737e+14
#> gp.f1       1.0588e-07 1.7948e-10 1.0000e-04
#> dnr         1.4788e-03 2.9885e-06 7.3180e-01
#> preauto     5.2986e+00 1.6977e-02 1.6537e+03
#> ttt24       3.2579e-01 5.2801e-04 2.0102e+02
#>
#> Average Treatment effects (G-formula) :
#>           Estimate  Std.Err     2.5%    97.5% P-value
#> treat0     26.2456   2.9998  20.3660  32.1251       0
#> treat1     10.1846   1.0464   8.1336  12.2355       0
#> treat:1-0 -16.0610   3.2552 -22.4410  -9.6810       0
#>
#> Average Treatment effects (double robust) :
#>           Estimate  Std.Err     2.5%    97.5% P-value
#> treat0     28.0289   3.2124  21.7328  34.3251       0
#> treat1     10.7094   1.0007   8.7481  12.6707       0
#> treat:1-0 -17.3195   3.3652 -23.9153 -10.7238       0

# SessionInfo

sessionInfo()
#> R version 4.3.2 (2023-10-31)
#> Platform: aarch64-apple-darwin22.6.0 (64-bit)
#> Running under: macOS Sonoma 14.3.1
#>
#> Matrix products: default
#> BLAS:   /Users/kkzh/.asdf/installs/R/4.3.2/lib/R/lib/libRblas.dylib
#> LAPACK: /Users/kkzh/.asdf/installs/R/4.3.2/lib/R/lib/libRlapack.dylib;  LAPACK version 3.11.0
#>
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> time zone: Europe/Copenhagen
#> tzcode source: internal
#>
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base
#>
#> other attached packages:
#> [1] mets_1.3.4     timereg_2.0.5  survival_3.5-7
#>
#> loaded via a namespace (and not attached):
#>  [1] cli_3.6.2           knitr_1.45          rlang_1.1.3
#>  [4] xfun_0.41           jsonlite_1.8.8      listenv_0.9.1
#>  [7] future.apply_1.11.1 lava_1.7.4          htmltools_0.5.6.1
#> [10] sass_0.4.7          rmarkdown_2.25      grid_4.3.2
#> [13] evaluate_0.23       jquerylib_0.1.4     fastmap_1.1.1
#> [16] mvtnorm_1.2-4       yaml_2.3.7          numDeriv_2016.8-1.1
#> [19] compiler_4.3.2      codetools_0.2-19    ucminf_1.2.0
#> [22] Rcpp_1.0.12         future_1.33.1       lattice_0.22-5
#> [25] digest_0.6.34       R6_2.5.1            parallelly_1.37.0
#> [28] parallel_4.3.2      splines_4.3.2       bslib_0.5.1
#> [31] Matrix_1.6-5        tools_4.3.2         globals_0.16.2
#> [34] cachem_1.0.8