Hierarchical Bayesian models

library(serosv)

Parametric Bayesian framework

Currently, serosv only provides models under the parametric Bayesian framework

Proposed approach

Prevalence has a parametric form π(ai, α), where α is a parameter vector

One can constrain the parameter space of the prior distribution P(α) in order to achieve the desired monotonicity of the posterior distribution P(π1, π2, ..., πm|y, n)

Where:

  • n = (n1, n2, ..., nm) and ni is the sample size at age ai
  • y = (y1, y2, ..., ym) and yi is the number of infected individuals among the ni sampled subjects (see the sketch below)
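
As an illustration, the notation above maps onto a serosv dataset as follows. This is only a sketch and assumes the aggregated age/pos/tot column layout used by the package datasets (e.g. mumps_uk_1986_1987).

df <- mumps_uk_1986_1987
a <- df$age   # ages a_i
y <- df$pos   # y_i: number of seropositive individuals at age a_i
n <- df$tot   # n_i: sample size at age a_i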

Farrington

Refer to Chapter 10.3.1

Proposed model

The model for prevalence is as follows

$$ \pi (a) = 1 - exp\{ \frac{\alpha_1}{\alpha_2}ae^{-\alpha_2 a} + \frac{1}{\alpha_2}(\frac{\alpha_1}{\alpha_2} - \alpha_3)(e^{-\alpha_2 a} - 1) -\alpha_3 a \} $$
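
For illustration only (this helper is not part of the serosv API), the prevalence curve can be coded directly from the formula above; the parameter values in the comment are close to the posterior means reported further down.

# Illustrative helper (not a serosv function): Farrington prevalence pi(a)
far_prevalence <- function(a, alpha1, alpha2, alpha3) {
  1 - exp(
    (alpha1 / alpha2) * a * exp(-alpha2 * a) +
      (1 / alpha2) * (alpha1 / alpha2 - alpha3) * (exp(-alpha2 * a) - 1) -
      alpha3 * a
  )
}
# e.g. far_prevalence(c(1, 5, 10, 20, 40), alpha1 = 0.141, alpha2 = 0.202, alpha3 = 0.0076)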

For the likelihood model, independent binomial distributions are assumed for the number of infected individuals at age ai

yi ∼ Bin(ni, πi),  for i = 1, 2, ..., m

The constraint on the parameter space can be incorporated by assuming truncated normal distributions for the components of α = (α1, α2, α3) in πi = π(ai, α)

αj ∼ truncated 𝒩(μj, τj),  j = 1, 2, 3

The joint posterior distribution for α can be derived by combining the likelihood and prior as follows

$$ P(\alpha|y) \propto \prod^m_{i=1} \text{Bin}(y_i|n_i, \pi(a_i, \alpha)) \prod^3_{j=1}\frac{1}{\tau_j}\exp\left(-\frac{1}{2\tau^2_j} (\alpha_j - \mu_j)^2\right) $$

  • Where the flat hyperprior distributions are defined as follows:

    • μj ∼ 𝒩(0, 10000)

    • τj⁻² ∼ Γ(100, 100)

The full conditional distribution of αi is thus $$ P(\alpha_i|\alpha_j, \alpha_k,\ j, k \neq i) \propto \frac{1}{\tau_i}\exp\left(-\frac{1}{2\tau^2_i} (\alpha_i - \mu_i)^2\right) \prod^m_{l=1} \text{Bin}(y_l|n_l, \pi(a_l, \alpha)) $$
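
To make the structure of the posterior concrete, here is a minimal sketch of the unnormalised log-posterior of α for fixed hyperparameters μj and τj, reusing the far_prevalence() helper above. It assumes the priors are truncated to αj > 0 (as the monotonicity constraint requires) and is illustrative only; serosv itself fits the model with Stan, as the sampler output below shows.

# Illustrative only: unnormalised log-posterior of alpha = (alpha1, alpha2, alpha3)
# for fixed hyperparameters mu and tau (standard deviations). The truncated-normal
# normalising constant does not depend on alpha, so it is dropped here.
log_posterior <- function(alpha, y, n, a, mu, tau) {
  if (any(alpha <= 0)) return(-Inf)                     # truncation: alpha_j > 0
  p <- far_prevalence(a, alpha[1], alpha[2], alpha[3])
  sum(dbinom(y, size = n, prob = p, log = TRUE)) +      # binomial likelihood
    sum(dnorm(alpha, mean = mu, sd = tau, log = TRUE))  # normal prior kernel
}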

Fitting data

To fit the Farrington model, use hierarchical_bayesian_model() and specify type = "far2" or type = "far3", where

  • type = "far2" refers to Farrington model with 2 parameters (α3 = 0)

  • type = "far3" refers to Farrington model with 3 parameters (α3 > 0)

df <- mumps_uk_1986_1987
model <- hierarchical_bayesian_model(df, type="far3")
#> 
#> SAMPLING FOR MODEL 'fra_3' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 4.5e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.45 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 5000 [  0%]  (Warmup)
#> Chain 1: Iteration:  500 / 5000 [ 10%]  (Warmup)
#> Chain 1: Iteration: 1000 / 5000 [ 20%]  (Warmup)
#> Chain 1: Iteration: 1500 / 5000 [ 30%]  (Warmup)
#> Chain 1: Iteration: 1501 / 5000 [ 30%]  (Sampling)
#> Chain 1: Iteration: 2000 / 5000 [ 40%]  (Sampling)
#> Chain 1: Iteration: 2500 / 5000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 3000 / 5000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 3500 / 5000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 4000 / 5000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 4500 / 5000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 5000 / 5000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 5.543 seconds (Warm-up)
#> Chain 1:                1.235 seconds (Sampling)
#> Chain 1:                6.778 seconds (Total)
#> Chain 1:
#> Warning: There were 2529 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems
#> Warning: The largest R-hat is 2.3, indicating chains have not mixed.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#r-hat
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#tail-ess

model$info
#>                       mean      se_mean           sd          2.5%
#> alpha1        1.410319e-01 7.594882e-04 3.476956e-03  1.334373e-01
#> alpha2        2.018899e-01 3.630412e-04 4.034540e-03  1.913307e-01
#> alpha3        7.614542e-03 4.716852e-04 3.799144e-03  2.177092e-03
#> tau_alpha1    3.841894e+00 2.382230e+00 3.910963e+00  6.090959e-05
#> tau_alpha2    2.371888e-02 4.672946e-03 9.169060e-02  5.450346e-06
#> tau_alpha3    1.806180e+00 9.400338e-01 2.567886e+00  3.780248e-05
#> mu_alpha1    -1.047234e-01 6.450957e-01 1.696985e+01 -2.483131e+01
#> mu_alpha2    -2.868934e+01 1.910086e+01 5.657776e+01 -1.239552e+02
#> mu_alpha3    -9.093217e-02 1.514612e+00 2.482064e+01 -3.645391e+01
#> sigma_alpha1  2.533010e+01 1.129414e+01 2.842510e+02  3.286808e-01
#> sigma_alpha2  9.353287e+01 5.930267e+01 1.583945e+02  2.239281e+00
#> sigma_alpha3  8.833304e+01 5.155492e+01 1.589376e+03  3.672553e-01
#> lp__         -2.531559e+03 1.686055e+00 3.651332e+00 -2.541276e+03
#>                        25%           50%           75%         97.5%      n_eff
#> alpha1        1.394263e-01  1.394263e-01  1.432506e-01  1.481497e-01  20.958316
#> alpha2        2.012914e-01  2.018139e-01  2.019202e-01  2.097457e-01 123.502655
#> alpha3        5.279795e-03  8.166077e-03  8.259968e-03  1.755877e-02  64.873450
#> tau_alpha1    6.280051e-02  2.519425e+00  9.256595e+00  9.256595e+00   2.695257
#> tau_alpha2    8.973431e-05  7.127719e-03  1.408183e-02  1.994427e-01 385.006548
#> tau_alpha3    2.585089e-01  3.807306e-01  3.208924e+00  7.414193e+00   7.462160
#> mu_alpha1    -6.803708e-01 -2.153163e-01  4.413135e-01  2.267290e+01 692.002334
#> mu_alpha2    -1.052238e+02  4.749706e+00  6.565663e+00  3.404348e+01   8.773762
#> mu_alpha3    -2.497802e-01  6.487872e-01  1.428044e+00  4.146269e+01 268.548667
#> sigma_alpha1  3.286808e-01  6.300127e-01  3.990418e+00  1.281321e+02 633.428071
#> sigma_alpha2  8.428880e+00  1.184472e+01  1.055652e+02  4.283394e+02   7.133974
#> sigma_alpha3  5.582392e-01  1.620657e+00  1.966809e+00  1.626470e+02 950.414227
#> lp__         -2.533785e+03 -2.528957e+03 -2.528957e+03 -2.527860e+03   4.689852
#>                   Rhat
#> alpha1       1.2391423
#> alpha2       1.0052161
#> alpha3       1.0271622
#> tau_alpha1   3.1489996
#> tau_alpha2   1.0104383
#> tau_alpha3   1.4125130
#> mu_alpha1    0.9997259
#> mu_alpha2    1.5006304
#> mu_alpha3    1.0020218
#> sigma_alpha1 1.0074424
#> sigma_alpha2 1.3331023
#> sigma_alpha3 1.0026808
#> lp__         1.9412126
plot(model)
#> Warning: No shared levels found between `names(values)` of the manual scale and the
#> data's fill values.

Log-logistic

Proposed approach

The model for seroprevalence is as follows

$$ \pi(a) = \frac{\beta a^\alpha}{1 + \beta a^\alpha}, \text{ } \alpha, \beta > 0 $$

The likelihood is specified to be the same as for the Farrington model (yi ∼ Bin(ni, πi)), with

logit(π(a)) = α2 + α1 log(a)

  • Where α1 = α and α2 = log(β) (see the quick check below)
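
A quick numerical check of this reparameterisation (arbitrary illustrative values, not part of the serosv workflow):

# Illustrative check: the closed form and the logit parameterisation agree
a <- 1:40                             # arbitrary ages
alpha <- 1.5; beta <- 0.05            # arbitrary values with alpha, beta > 0
alpha1 <- alpha; alpha2 <- log(beta)
pi_closed <- beta * a^alpha / (1 + beta * a^alpha)
pi_logit  <- plogis(alpha2 + alpha1 * log(a))
all.equal(pi_closed, pi_logit)        # TRUE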

The prior model of α1 is specified as α1 ∼ truncated 𝒩(μ1, τ1), with flat hyperpriors as in the Farrington model

β is constrained to be positive by specifying α2 ∼ 𝒩(μ2, τ2), since β = exp(α2) is positive for any real α2

The full conditional distribution of α1 is thus

$$ P(\alpha_1|\alpha_2) \propto \frac{1}{\tau_1} \exp\left(-\frac{1}{2 \tau_1^2} (\alpha_1 - \mu_1)^2\right) \prod_{i=1}^m \text{Bin}(y_i|n_i,\pi(a_i, \alpha_1, \alpha_2)) $$

The full conditional distribution of α2 can be derived in the same way

Fitting data

To fit the log-logistic model, use hierarchical_bayesian_model() and specify type = "log_logistic"

df <- rubella_uk_1986_1987
model <- hierarchical_bayesian_model(df, type="log_logistic")
#> 
#> SAMPLING FOR MODEL 'log_logistic' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 1.1e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.11 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 5000 [  0%]  (Warmup)
#> Chain 1: Iteration:  500 / 5000 [ 10%]  (Warmup)
#> Chain 1: Iteration: 1000 / 5000 [ 20%]  (Warmup)
#> Chain 1: Iteration: 1500 / 5000 [ 30%]  (Warmup)
#> Chain 1: Iteration: 1501 / 5000 [ 30%]  (Sampling)
#> Chain 1: Iteration: 2000 / 5000 [ 40%]  (Sampling)
#> Chain 1: Iteration: 2500 / 5000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 3000 / 5000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 3500 / 5000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 4000 / 5000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 4500 / 5000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 5000 / 5000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0.628 seconds (Warm-up)
#> Chain 1:                0.689 seconds (Sampling)
#> Chain 1:                1.317 seconds (Total)
#> Chain 1:
#> Warning: There were 397 divergent transitions after warmup. See
#> https://mc-stan.org/misc/warnings.html#divergent-transitions-after-warmup
#> to find out why this is a problem and how to eliminate them.
#> Warning: Examine the pairs() plot to diagnose sampling problems

model$type
#> [1] "log_logistic"
plot(model)
#> Warning: No shared levels found between `names(values)` of the manual scale and the
#> data's fill values.