Wednesday, March 18, 2026

Bayesian modeling: Beyond Stata’s built-in models


This post was written jointly with Nikolay Balov, Senior Statistician and Software Developer, StataCorp.

A question on Statalist motivated us to write this blog entry.

A user asked whether the churdle command (http://www.stata.com/stata14/hurdle-models/) for fitting hurdle models, new in Stata 14, could be combined with the bayesmh command (http://www.stata.com/stata14/bayesian-analysis/) for fitting Bayesian models, also new in Stata 14:

http://www.statalist.org/forums/forum/general-stata-discussion/general/1290426-comibining-bayesmh-and-churdle

Our initial reaction to this question was ‘No’ or, more precisely, ‘Not easily’, because hurdle models are not among the likelihood models supported by bayesmh. One can write a program to compute the log likelihood of the double-hurdle model and use this program with bayesmh (in the spirit of http://www.stata.com/stata14/bayesian-evaluators/), but this may seem like a daunting task if you are not familiar with Stata programming.

And then we realized: why not simply call churdle from the evaluator to compute the log likelihood? All we need is for churdle to evaluate the log likelihood at specific values of the model parameters without performing any optimization iterations. This can be achieved by specifying churdle‘s options from() and iterate(0).
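For example, the following sketch (the values in b0 are purely illustrative, not estimates) evaluates churdle’s log likelihood at a supplied parameter vector, ordered as churdle stores its parameters: main equation, then selection equation, then lnsigma:

. webuse fitness
. matrix b0 = (.005, 1.17, -.065, .14, .128)
. churdle linear hours age, select(commute) ll(0) from(b0) iterate(0)
. display e(ll)

With iterate(0), churdle takes no optimization steps; it simply reports, and stores in e(ll), the log likelihood evaluated at the values in b0.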

Let’s look at an example. We consider a simple hurdle model using a subset of the fitness dataset from [R] churdle:


. webuse fitness
. set seed 17653
. sample 10
. churdle linear hours age, select(commute) ll(0)

Iteration 0:   log likelihood = -2783.3352
Iteration 1:   log likelihood =  -2759.029
Iteration 2:   log likelihood = -2758.9992
Iteration 3:   log likelihood = -2758.9992

Cragg hurdle regression                         Number of obs     =      1,983
                                                LR chi2(1)        =       3.42
                                                Prob > chi2       =     0.0645
Log likelihood = -2758.9992                     Pseudo R2         =     0.0006

------------------------------------------------------------------------------
       hours |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
hours        |
         age |   .0051263   .0028423     1.80   0.071    -.0004446    .0106971
       _cons |   1.170932   .1238682     9.45   0.000     .9281548    1.413709
-------------+----------------------------------------------------------------
selection_ll |
     commute |  -.0655171   .1561046    -0.42   0.675    -.3714765    .2404423
       _cons |   .1421166   .0882658     1.61   0.107    -.0308813    .3151144
-------------+----------------------------------------------------------------
lnsigma      |
       _cons |   .1280215     .03453     3.71   0.000      .060344     .195699
-------------+----------------------------------------------------------------
      /sigma |   1.136577    .039246                      1.062202    1.216161
------------------------------------------------------------------------------

Let’s assume for a moment that we already have an evaluator, mychurdle1, that returns the corresponding log-likelihood value. We can fit a Bayesian hurdle model using bayesmh as follows:


. gen byte hours0 = (hours==0) //dependent variable for the selection equation
. set seed 123
. bayesmh (hours age) (hours0 commute),
        llevaluator(mychurdle1, parameters({lnsig}))
        prior({hours:} {hours0:} {lnsig}, flat)
        saving(sim, replace) dots

(output omitted)

We use a two-equation specification to fit this model. The main regression is specified first, and the selection regression is specified next. The additional parameter, the log of the standard deviation associated with the main regression, is specified in llevaluator()‘s suboption parameters(). All parameters are assigned flat priors to obtain results similar to churdle’s. MCMC results are saved in sim.dta.

Here is the actual output from bayesmh:


. bayesmh (hours age) (hours0 commute), llevaluator(mychurdle1, parameters({lns
> ig})) prior({hours:} {hours0:} {lnsig}, flat) saving(sim, replace) dots

Burn-in 2500 aaaaaaaaa1000aaaaaa...2000..... done
Simulation 10000 .........1000.........2000.........3000.........4000.........5
> 000.........6000.........7000.........8000.........9000.........10000 done

Model summary
------------------------------------------------------------------------------
Likelihood:
  hours hours0 ~ mychurdle1(xb_hours,xb_hours0,{lnsig})

Priors:
       {hours:age _cons} ~ 1 (flat)                                        (1)
  {hours0:commute _cons} ~ 1 (flat)                                        (2)
                 {lnsig} ~ 1 (flat)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_hours.
(2) Parameters are elements of the linear form xb_hours0.

Bayesian regression                              MCMC iterations  =     12,500
Random-walk Metropolis-Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      1,983
                                                 Acceptance rate  =      .2889
                                                 Efficiency:  min =     .05538
                                                              avg =     .06266
Log marginal likelihood = -2772.3953                      max =     .06945

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
hours        |
         age |  .0050916   .0027972   .000106   .0049923  -.0000372   .0107231
       _cons |  1.167265    .124755   .004889   1.175293   .9125105   1.392021
-------------+----------------------------------------------------------------
hours0       |
     commute | -.0621282   .1549908   .006585  -.0613511  -.3623891   .2379805
       _cons |  .1425693   .0863626   .003313   .1430396  -.0254507   .3097677
-------------+----------------------------------------------------------------
       lnsig |  .1321532   .0346446   .001472   .1326704   .0663646   .2015249
------------------------------------------------------------------------------

file sim.dta saved

The results are similar to those produced by churdle, as one would expect with noninformative priors.

If desired, we can use bayesstats summary to obtain the estimate of the standard deviation:


. bayesstats summary (sigma: exp({lnsig}))

Posterior summary statistics                      MCMC sample size =    10,000

       sigma : exp({lnsig})

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
       sigma |  1.141969   .0396264   .001685   1.141874   1.068616   1.223267
------------------------------------------------------------------------------

 
Let’s now talk in more detail about the log-likelihood evaluator. We will consider two evaluators: one using churdle and one directly implementing the log likelihood of the considered hurdle model.

 

Log-likelihood evaluator using churdle

 

Here we demonstrate how to write a log-likelihood evaluator that calls an existing Stata estimation command, churdle in our example, to compute the log likelihood.


program mychurdle1
        version 14.0
        args llf
        tempname b
        mat `b' = ($MH_b, $MH_p)
        capture churdle linear $MH_y1 $MH_y1x1 if $MH_touse, ///
                    select($MH_y2x1) ll(0) from(`b') iterate(0)
        if _rc {
                if (_rc==1) { // handle break key
                        exit _rc
                }
                scalar `llf' = .
        }
        else {
                scalar `llf' = e(ll)
        }
end

The mychurdle1 program returns the log-likelihood value computed by churdle at the current values of the model parameters. This program accepts one argument, a temporary scalar llf to contain the log-likelihood value. We store the current values of the model parameters (regression coefficients from the two equations, stored in vector MH_b, and the extra parameter, the log standard deviation, stored in vector MH_p) in a temporary matrix b. We specify churdle‘s options from() and iterate(0) to evaluate the log likelihood at the current parameter values. Finally, we store the resulting log-likelihood value in llf (or a missing value if the command failed to evaluate the log likelihood).

 

Log-likelihood evaluator directly computing the log likelihood

 

Here we demonstrate how to write a log-likelihood evaluator that computes the likelihood of the fitted hurdle model directly rather than calling churdle.
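Before looking at the code, it helps to write down what the evaluator must compute. Writing Φ and φ for the standard normal cdf and density, x_j'β for the main-equation linear predictor, z_j'γ for the selection-equation linear predictor, and σ = exp(lnsig), the observation-level log likelihood of this hurdle model is

    ln f_j = ln{1 - Φ(z_j'γ)}                                          if y_j <= 0
    ln f_j = ln Φ(z_j'γ) - ln Φ(x_j'β/σ) + ln{φ((y_j - x_j'β)/σ)/σ}    if y_j > 0

The evaluator computes exactly these two branches observation by observation, with the argument xg playing the role of z_j'γ and xb the role of x_j'β.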


program mychurdle2
        version 14.0
        args lnf xb xg lnsig
        tempname sig
        scalar `sig' = exp(`lnsig')
        tempvar lnfj
        qui gen double `lnfj' = normal(`xg')  if $MH_touse
        qui replace `lnfj'    = log(1 - `lnfj') if $MH_y1 <= 0 & $MH_touse
        qui replace `lnfj'    = log(`lnfj') - log(normal(`xb'/`sig'))   ///
                              + log(normalden($MH_y1,`xb',`sig'))       ///
                                if $MH_y1 > 0 & $MH_touse
        summarize `lnfj' if $MH_touse, meanonly
        if r(N) < $MH_n {
            scalar `lnf' = .
            exit
        }
        scalar `lnf' = r(sum)
end

The mychurdle2 program accepts four arguments: a temporary scalar lnf to contain the log-likelihood value, temporary variables xb and xg containing the linear predictors from the corresponding main and selection equations evaluated at the current values of the model parameters, and a temporary scalar lnsig containing the current value of the log-standard-deviation parameter. We compute and store the observation-level log likelihood in a temporary variable lnfj. Global MH_y1 contains the name of the dependent variable from the first (main) equation, and global MH_touse marks the estimation sample. If all observation-specific log-likelihood contributions are nonmissing, we store the overall log-likelihood value in lnf; otherwise, we store a missing value.

We fit our model using the same syntax as earlier, except we use mychurdle2 as the program evaluator.


. set seed 123
. bayesmh (hours age) (hours0 commute), llevaluator(mychurdle2, parameters({lns
> ig})) prior({hours:} {hours0:} {lnsig}, flat) saving(sim, replace) dots

Burn-in 2500 aaaaaaaaa1000aaaaaa...2000..... done
Simulation 10000 .........1000.........2000.........3000.........4000.........5
> 000.........6000.........7000.........8000.........9000.........10000 done

Model summary
------------------------------------------------------------------------------
Likelihood:
  hours hours0 ~ mychurdle2(xb_hours,xb_hours0,{lnsig})

Priors:
       {hours:age _cons} ~ 1 (flat)                                        (1)
  {hours0:commute _cons} ~ 1 (flat)                                        (2)
                 {lnsig} ~ 1 (flat)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_hours.
(2) Parameters are elements of the linear form xb_hours0.

Bayesian regression                              MCMC iterations  =     12,500
Random-walk Metropolis-Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      1,983
                                                 Acceptance rate  =      .2889
                                                 Efficiency:  min =     .05538
                                                              avg =     .06266
Log marginal likelihood = -2772.3953                      max =     .06945

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
hours        |
         age |  .0050916   .0027972   .000106   .0049923  -.0000372   .0107231
       _cons |  1.167265    .124755   .004889   1.175293   .9125105   1.392021
-------------+----------------------------------------------------------------
hours0       |
     commute | -.0621282   .1549908   .006585  -.0613511  -.3623891   .2379805
       _cons |  .1425693   .0863626   .003313   .1430396  -.0254507   .3097677
-------------+----------------------------------------------------------------
       lnsig |  .1321532   .0346446   .001472   .1326704   .0663646   .2015249
------------------------------------------------------------------------------

file sim.dta not found; file saved

We obtain the same results as those obtained using approach 1, and we obtain them much faster.

 

Final remarks

 

Approach 1 is very straightforward. It can be applied to any Stata command that returns the log likelihood and allows you to specify the parameter values at which the log likelihood must be evaluated. Without too much programming effort, you can use almost any existing Stata maximum likelihood estimation command with bayesmh. A drawback of approach 1 is slower execution compared with programming the likelihood directly, as in approach 2. For example, the command using the mychurdle1 evaluator from approach 1 took about 25 minutes to run, whereas the command using the mychurdle2 evaluator from approach 2 took only 20 seconds.


