
Bayesian threshold autoregressive models – The Stata Blog


Autoregressive (AR) models are some of the most widely used models in applied economics, among other disciplines, because of their generality and simplicity. However, the dynamic characteristics of real economic and financial data can change from one time period to another, limiting the applicability of linear time-series models. For example, the change in the unemployment rate is a function of the state of the economy, whether it is expanding or contracting. A variety of models have been developed that allow time-series dynamics to depend on the regime of the system they are part of. The class of regime-dependent models includes Markov-switching, smooth transition, and threshold autoregressive (TAR) models.

TAR (Tong 1982) is a class of nonlinear time-series models with applications in econometrics (Hansen 2011), financial analysis (Cao and Tsay 1992), and ecology (Tong 2011). TAR models allow regime switching to be triggered by the observed level of an outcome in the past. The threshold command in Stata provides frequentist estimation of some TAR models.

In Stata 17, the bayesmh command supports time-series operators in linear and nonlinear model specifications; see [BAYES] bayesmh. In this blog entry, I want to show how we can fit some Bayesian TAR models using the bayesmh command. The examples will also demonstrate modeling flexibility not possible with the current threshold command.

TAR model definition

Let ({y_t}) be a series observed at discrete times (t). A general TAR model of order (p), TAR((p)), with (k) regimes has the form
\[
y_t = a_0^{j} + a_1^{j} y_{t-1} + \dots + a_p^{j} y_{t-p} + \sigma_{j} e_t, \quad {\rm if} \quad
r_{j-1} < z_t \le r_{j}
\]
where (z_t) is a threshold variable, (-\infty < r_0 < \dots < r_k = \infty) are regime thresholds, and (e_t) are independent standard normally distributed errors. The (j)th regime has its own set of autoregressive coefficients ({a_i^j})'s and standard deviations (\sigma_j)'s. Different regimes are also allowed to have different orders (p). In the above equation, this can be indicated by replacing (p) with regime-specific (p_j)'s.

In a TAR model, as implied by the definition, structural breaks do not happen at fixed time points but are triggered by the magnitude of the threshold variable (z_t). It is common to have (z_t = y_{t-d}), where (d) is a positive integer called the delay parameter. These models are called self-exciting TAR (SETAR) and are the ones I illustrate below.

Real GDP dataset

In Beaudry and Koop (1993), TAR models were used to model gross national product. The authors demonstrated asymmetric persistence in the growth rate of gross national product, with positive shocks (associated with expansion periods) being more persistent than negative shocks (recession periods). In a similar manner, I use the growth rate of real gross domestic product (GDP) of the United States as an indicator of the state of the economy to model the difference between expansion and recession periods.

Quarterly observations on real GDP, measured in billions of dollars, are obtained from the Federal Reserve Economic Data repository using the import fred command. I consider observations only between the first quarter of 1947 and the second quarter of 2021. A quarterly date variable, dateq, is generated and used with tsset to set up the time series.

. import fred GDPC1

Summary
-------------------------------------------------------------------------------
Series ID                    Nobs    Date range                Frequency
-------------------------------------------------------------------------------
GDPC1                        301     1947-01-01 to 2022-01-01  Quarterly
-------------------------------------------------------------------------------
# of series imported: 1
   highest frequency: Quarterly
    lowest frequency: Quarterly

. keep if daten >= date("01jan1947", "DMY") & daten <= 
> date("01apr2021", "DMY")
(3 observations deleted)

. generate dateq = qofd(daten)

. tsset dateq, quarterly

Time variable: dateq, 1947q1 to 2021q2
        Delta: 1 quarter

I am interested in the change of real GDP from one quarter to the next. For this purpose, I generate a new variable, rgdp, to measure this change in percentages. Positive values of rgdp indicate economic growth or expansion, while values close to 0 or negative are associated with stagnation or recession. Below, I use the tsline command to plot the time series. There, some of the recession periods, indicated by sharp drops in GDP, are visible, including the most recent one in 2020.

. generate double rgdp = 100*D1.GDPC1/L1.GDPC1
(1 missing value generated)

. tsline rgdp

A TAR model with two regimes estimates one threshold value (r) that can be visualized as a horizontal line separating, somewhat informally, expansion from recession periods. In a Bayesian TAR model, the threshold (r) is a random variable with a distribution estimated from its prior and the observed data.
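
For instance, one could overlay a horizontal reference line at 0, an informal candidate threshold, on the plot of rgdp. This is a minimal sketch; the value 0 is only a visual aid, not an estimate:

. tsline rgdp, yline(0, lpattern(dash)) ytitle("Real GDP growth (%)")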

Bayesian TAR specification

Before I show how to specify a Bayesian TAR model in Stata, let me first fit a simpler Bayesian AR(1) model for rgdp using the bayesmh command. It will serve as a baseline for comparison with models with structural breaks.

I use the fairly uninformative, given the range of rgdp, normal(0, 100) prior for the two coefficients in the {rgdp:} equation and the igamma(0.01, 0.01) prior for the variance parameter {sig2}. I also use Gibbs sampling for more efficient simulation of the model parameters.

. bayesmh rgdp L1.rgdp, likelihood(normal({sig2}))                
>         prior({rgdp:}, normal(0, 100)) block({rgdp:}, gibbs)    
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2}, gibbs)  
>         rseed(17) dots

Burn-in 2500 .........1000.........2000..... done
Simulation 10000 .........1000.........2000.........3000.........4000.........
> 5000.........6000.........7000.........8000.........9000.........10000 done

Model summary
------------------------------------------------------------------------------
Likelihood:
  rgdp ~ normal(xb_rgdp,{sig2})

Priors:
  {rgdp:L.rgdp _cons} ~ normal(0,100)                                      (1)
               {sig2} ~ igamma(0.01,0.01)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_rgdp.

Bayesian normal regression                       MCMC iterations  =     12,500
Gibbs sampling                                   Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        296
                                                 Acceptance rate  =          1
                                                 Efficiency:  min =      .9584
                                                              avg =      .9755
Log marginal-likelihood = -478.07327                          max =          1

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. dev.     MCSE     Median  [95% cred. interval]
-------------+----------------------------------------------------------------
rgdp         |
        rgdp |
         L1. |  .1239926   .0579035   .000591   .1240375   .0096098    .237592
             |
       _cons |  .6762768   .0805277   .000805    .676687   .5186598   .8378632
-------------+----------------------------------------------------------------
        sig2 |  1.344712   .1098713   .001117   1.339746   1.144444   1.571802
------------------------------------------------------------------------------

The posterior mean estimate for the AR(1) coefficient {L1.rgdp} is 0.12, indicating a positive serial correlation for rgdp. This suggests some degree of persistence in real GDP growth. The posterior mean estimate of 1.34 for {sig2} suggests a volatility level above 1 percent, if the latter is measured by the standard deviation. However, this simple AR(1) model does not tell us how the persistence and volatility change depending on the state of the economy.
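
To report volatility directly on the standard-deviation scale, one could summarize a function of {sig2}; a minimal sketch, where the label sd is just an arbitrary name for the expression:

. bayesstats summary (sd: sqrt({sig2}))

Its posterior mean should be roughly sqrt(1.34), about 1.16, consistent with the statement above.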

Before continuing, I save the simulation and estimation results for later reference.

. bayesmh, saving(bar1sim, replace)
note: file bar1sim.dta saved.

. estimates store bar1

To describe a two-state model for the economy, I want to specify the simplest two-regime SETAR model with order 1 and delay 1. In the next section, I discuss the choice of order and delay parameters.

The model can be summarized by the following two equations:
\begin{align}
{\bf rgdp}_t &= a_0^{1} + a_1^{1} {\bf rgdp}_{t-1} + \sigma_{1} e_t, \quad {\rm if} \quad
y_{t-1} < r \\
{\bf rgdp}_t &= a_0^{2} + a_1^{2} {\bf rgdp}_{t-1} + \sigma_{2}
e_t, \quad {\rm if} \quad y_{t-1} \ge r
\end{align}

To specify the regression portion of this model with bayesmh, I use a substitutable expression with conditional logic,

cond(L1.rgdp<{r}, {r1:a0}+{r1:a1}*L1.rgdp, {r2:a0}+{r2:a1}*L1.rgdp)

where {r1:a0} and {r1:a1} are the coefficients for the first regime and {r2:a0} and {r2:a1} are the coefficients for the second regime.

The regime-specific variance of the normal likelihood can be similarly
specified by the expression

cond(L1.rgdp<{r},{sig1},{sig2})

Instead of assuming a fixed threshold value for (r), with 0 being a natural choice, I consider (r) to be a hyperparameter with a uniform(-0.5, 0.5) prior. I thus assume that the threshold is within half a percentage point of 0. Given the range of rgdp and that 0 separates positive from negative growth, this seems to be a reasonable assumption. Using uninformative priors for (r) without any restrictions on its range would make the model unstable because of the possibility of collapsing one of the regimes, that is, having a regime with no or only a few observed points. The priors for the coefficients and variances stay the same as in the previous model.

. bayesmh rgdp = (cond(L1.rgdp<{r},                               
>                 {r1:a0}+{r1:a1}*L1.rgdp,                        
>                 {r2:a0}+{r2:a1}*L1.rgdp)),                      
>         likelihood(normal(cond(L1.rgdp<{r}, {sig1}, {sig2})))   
>         prior({r1:}, normal(0, 100)) block({r1:})               
>         prior({r2:}, normal(0, 100)) block({r2:})               
>         prior({sig1}, igamma(0.01, 0.01)) block({sig1})         
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2})         
>         prior({r}, uniform(-0.5, 0.5)) block({r})               
>         rseed(17) init({sig1} {sig2} 1) dots

Burn-in 2500 aaaaaaaaa1000aaaaaaaaa2000aaaaa done
Simulation 10000 .........1000.........2000.........3000.........4000.........
> 5000.........6000.........7000.........8000.........9000.........10000 done

Model summary
------------------------------------------------------------------------------
Likelihood:
  rgdp ~ normal(,)

Priors:
   {r1:a0 a1} ~ normal(0,100)
   {r2:a0 a1} ~ normal(0,100)
  {sig1 sig2} ~ igamma(0.01,0.01)
          {r} ~ uniform(-0.5,0.5)

Expressions:
  expr1 : cond(L1.rgdp<{r},{r1:a0}+{r1:a1}*L1.rgdp,{r2:a0}+{r2:a1}*L1.rgdp)
  expr2 : cond(L1.rgdp<{r},{sig1},{sig2})
------------------------------------------------------------------------------

Bayesian normal regression                       MCMC iterations  =     12,500
Random-walk Metropolis–Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        296
                                                 Acceptance rate  =      .3554
                                                 Efficiency:  min =     .04586
                                                              avg =      .1205
Log marginal-likelihood = -415.46111                          max =      .2235

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. dev.     MCSE     Median  [95% cred. interval]
-------------+----------------------------------------------------------------
r1           |
          a0 | -1.327623   .7508926   .020208  -1.284013  -2.794005   .2010012
          a1 | -.8866186   .3328065   .009604  -.8828538  -1.582993  -.2458004
-------------+----------------------------------------------------------------
r2           |
          a0 |  .5521038   .0739373   .002338   .5516799   .4093326   .6992918
          a1 |  .3015663   .0555794   .001702   .3020909   .1883468   .4070365
-------------+----------------------------------------------------------------
           r | -.4834577   .0205573    .00096  -.4903714  -.4995296   -.416813
        sig1 |  6.752686   2.201104   .066593   6.356055   3.684466   12.27644
        sig2 |   .678672   .0580648   .001228    .676249   .5714164   .7989442
------------------------------------------------------------------------------

The model takes less than a minute to run. There are no obvious convergence problems reported by bayesmh, and the average sampling efficiency of 12% is good.

The threshold parameter is estimated to be about (-0.48). Although this is close to the lower limit of (-0.5) set by the prior, it is still strictly greater than (-0.5) and is estimated with high precision; the MCSE is less than 0.001.

The autoregression coefficients are negative in the first regime, r1, and positive in the second, r2. The second regime thus has much higher persistence. Also notable is the much higher variability in the first regime, about 6.75, compared with 0.68 in the second.
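
Although not shown here, convergence can also be assessed visually and through effective sample sizes; a minimal sketch, run right after the bayesmh fit while its results are still in memory:

. bayesgraph diagnostics {r} {sig1} {sig2}
. bayesstats ess {r1:} {r2:} {r} {sig1} {sig2}

Slow mixing would show up as wandering trace plots and slowly decaying autocorrelations for the affected parameters.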

I save the last estimation results and use the bayesstats ic command to compare the SETAR(1) and baseline AR(1) models.

. bayesmh, saving(bster1sim, replace)
note: file bster1sim.dta not found; file saved.

. estimates store bster1

. bayesstats ic bar1 bster1

Bayesian information criteria

----------------------------------------------
             |       DIC    log(ML)    log(BF)
-------------+--------------------------------
        bar1 |   929.311  -478.0733          .
      bster1 |  782.0817  -415.4611   62.61216
----------------------------------------------
Note: Marginal likelihood (ML) is computed
      using Laplace–Metropolis approximation.

The SETAR(1) model has a lower DIC and higher log marginal-likelihood than the AR(1) model. Of course, we expect the more complex and flexible SETAR(1) model to provide a better fit based on the likelihood alone. Note, however, that the marginal likelihood incorporates, in addition to the likelihood, the priors on model parameters and thus, indirectly, model complexity as well.
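
If we treat these two specifications as the only candidates and give them equal prior weights, the marginal likelihoods can also be converted into posterior model probabilities; a minimal sketch using the stored estimates:

. bayestest model bar1 bster1

With a log Bayes factor of about 63, essentially all posterior probability would go to bster1.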

For comparison, I also perform frequentist estimation of the same model using the threshold command.

. threshold rgdp, regionvars(l.rgdp) threshvar(l1.rgdp)

Searching for threshold: 1
(running 237 regressions)
..................................................    50
..................................................   100
..................................................   150
..................................................   200
.....................................

Threshold regression
                                                       Number of obs =     296
Full sample: 1947q3 thru 2021q2                        AIC           = 30.1871
Number of thresholds = 1                               BIC           = 44.9485
Threshold variable: L.rgdp                             HQIC          = 36.0973

---------------------------------
Order     Threshold           SSR
---------------------------------
    1       -0.3881      319.0398
---------------------------------

------------------------------------------------------------------------------
        rgdp | Coefficient  Std. err.      z    P>|z|     [95% conf. interval]
-------------+----------------------------------------------------------------
Region1      |
        rgdp |
         L1. |  -.7989782     .12472    -6.41   0.000    -1.043425   -.5545315
             |
       _cons |  -.9796166   .2473612    -3.96   0.000    -1.464436   -.4947977
-------------+----------------------------------------------------------------
Region2      |
        rgdp |
         L1. |   .2910881   .0755517     3.85   0.000     .1430094    .4391667
             |
       _cons |   .5663334   .0987315     5.74   0.000     .3728231    .7598436
------------------------------------------------------------------------------

The estimates of the regression coefficients are similar to the Bayesian ones: negative in the first regime and positive in the second. The threshold estimate is (-0.39), somewhat higher than the posterior mean estimate in the Bayesian model. A limitation of the threshold command is the lack of error-variance estimates for the two regimes.
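
As a rough workaround, one could compute regime-specific residual standard deviations by hand; a minimal sketch, where regime1 and res are arbitrary variable names and the hard-coded numbers are simply the point estimates from the table above:

. generate byte regime1 = (L1.rgdp < -0.3881) if !missing(L1.rgdp)
. generate double res = cond(regime1, rgdp - (-.9796166) - (-.7989782)*L1.rgdp,
>         rgdp - .5663334 - .2910881*L1.rgdp)
. tabstat res, by(regime1) statistics(sd n)

The resulting standard deviations should be roughly comparable with the posterior means of sqrt({sig1}) and sqrt({sig2}) from the Bayesian fit.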

Autoregression order selection

In the previous example, I fit the simplest SETAR model of order 1 and delay 1. In general, these parameters are unknown, and one may not have a good prior choice for them. One solution is to fit models of different orders and compare them. A better solution is to consider one Bayesian model in which the orders are included as hyperparameters and are thus estimated along with all other parameters.

The following extension of the previous Bayesian model considers orders from 1 to 4 for each regime. Two additional discrete hyperparameters, p1 and p2, indicate the regime orders. Both regimes are assumed to be at least of order 1. These hyperparameters thus take values in the set ({1,2,3,4}) according to some prior probabilities. I use the index(0.2,0.5,0.2,0.1) prior to place my highest expectation, 0.5, on order 2, then equal probabilities of 0.2 on orders 1 and 3, and finally probability 0.1 on order 4. Orders 2, 3, and 4 are turned on and off using indicator variables as multipliers for the coefficients b2, b3, and b4, separately for each regime.

. bayesmh rgdp = (cond(L1.rgdp<{r},                               
>         {r1:a0} + {r1:a1}*L1.rgdp + ({p1}>1)*{r1:b2}*L2.rgdp +  
>         ({p1}>2)*{r1:b3}*L3.rgdp  + ({p1}>3)*{r1:b4}*L4.rgdp,   
>         {r2:a0} + {r2:a1}*L1.rgdp + ({p2}>1)*{r2:b2}*L2.rgdp +  
>         ({p2}>2)*{r2:b3}*L3.rgdp  + ({p2}>3)*{r2:b4}*L4.rgdp)), 
>         likelihood(normal(cond(L1.rgdp<{r}, {sig1}, {sig2})))   
>         prior({p1}, index(0.2,0.5,0.2,0.1)) block({p1})         
>         prior({p2}, index(0.2,0.5,0.2,0.1)) block({p2})         
>         prior({r1:}, normal(0, 100)) block({r1:})               
>         prior({r2:}, normal(0, 100)) block({r2:})               
>         prior({sig1}, igamma(0.01, 0.01)) block({sig1})         
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2})         
>         prior({r}, uniform(-0.5, 0.5)) block({r})               
>         rseed(17) init({sig1} {sig2} 1 {p1} {p2} 2) dots

Burn-in 2500 aaaaaaaaa1000aaaaaaaaa2000aaaaa done
Simulation 10000 .........1000.........2000.........3000.........4000.........
> 5000.........6000.........7000.........8000.........9000.........10000 done

Model summary
------------------------------------------------------------------------------
Likelihood:
  rgdp ~ normal(,)

Priors:
              {p1 p2} ~ index(0.2,0.5,0.2,0.1)
  {r1:a0 a1 b2 b3 b4} ~ normal(0,100)
  {r2:a0 a1 b2 b3 b4} ~ normal(0,100)
          {sig1 sig2} ~ igamma(0.01,0.01)
                  {r} ~ uniform(-0.5,0.5)

Expressions:
  expr1 : cond(L1.rgdp<{r},{r1:a0} + {r1:a1}*L1.rgdp + ({p1}>1)*{r1:b2}*L2.rgd
          p + ({p1}>2)*{r1:b3}*L3.rgdp + ({p1}>3)*{r1:b4}*L4.rgdp,{r2:a0} +
          {r2:a1}*L1.rgdp + ({p2}>1)*{r2:b2}*L2.rgdp + ({p2}>2)*{r2:b3}*L3.rgd
          p + ({p2}>3)*{r2:b4}*L4.rgdp)
  expr2 : cond(L1.rgdp<{r},{sig1},{sig2})
------------------------------------------------------------------------------

Bayesian normal regression                       MCMC iterations  =     12,500
Random-walk Metropolis–Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        293
                                                 Acceptance rate  =      .3534
                                                 Efficiency:  min =      .0167
                                                              avg =     .04996
Log marginal-likelihood = -415.62492                          max =      .2163

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. dev.     MCSE     Median  [95% cred. interval]
-------------+----------------------------------------------------------------
r1           |
          a0 | -1.382746   .7689649   .043474  -1.395605  -2.953167   .0933557
          a1 | -.8994067   .3164027   .021135  -.8966563   -1.53448  -.2834772
          b2 | -.6850185    8.58136   .561206   .0350022  -18.77976   16.92638
          b3 | -1.115146    9.72345   .595084  -.5968546  -20.72936   17.06076
          b4 |  .1783556   10.30925   .572088  -.0286035  -18.84217   21.22304
-------------+----------------------------------------------------------------
r2           |
          a0 |  .4381789   .0809618   .003256   .4369241   .2837359   .6029897
          a1 |  .3064629   .0549879   .002723   .3078107   .1969199    .409214
          b2 |  .1311621   .0531822   .004115   .1343153   .0269019   .2253627
          b3 | -.2968566   9.603545   .515804  -.1204644  -19.08613   18.83395
          b4 | -.9700926   9.811401   .462162  -1.123427  -20.09342   18.72596
-------------+----------------------------------------------------------------
          p1 |    1.1602   .3812486   .018329          1          1          2
          p2 |    1.9858   .1902682   .012683          2          1          2
           r | -.4845632   .0183435   .000932  -.4903276  -.4994973  -.4211905
        sig1 |  6.850306   2.403665   .078689   6.427896   3.629047   12.61115
        sig2 |  .6557395   .0570403   .001226   .6538532   .5533153   .7733018
------------------------------------------------------------------------------

The model takes about 2 minutes to run and has a good average sampling efficiency of about 5%. Posterior median estimates for the order parameters are 1 for the first regime and 2 for the second. We saw that the first regime is more volatile. During recessions, having a shorter order is consistent with having higher volatility.

Note that the parameters b2, b3, and b4 are not the actual autoregression coefficients for the series. To summarize the autoregression coefficients for the first regime, r1, we need to include the order indicators for p1 from the model specification.

. bayesstats summary {r1:a0} {r1:a1} (a2:({p1}>1)*{r1:b2}) 
>         (a3:({p1}>2)*{r1:b3}) (a4:({p1}>3)*{r1:b4})

Posterior summary statistics                      MCMC sample size =    10,000

          a2 : ({p1}>1)*{r1:b2}
          a3 : ({p1}>2)*{r1:b3}
          a4 : ({p1}>3)*{r1:b4}

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. dev.     MCSE     Median  [95% cred. interval]
-------------+----------------------------------------------------------------
r1           |
          a0 | -1.382746   .7689649   .043474  -1.395605  -2.953167   .0933557
          a1 | -.8994067   .3164027   .021135  -.8966563   -1.53448  -.2834772
-------------+----------------------------------------------------------------
          a2 |  .0517708   .2540154   .013244          0  -.2375046   .9287818
          a3 | -.0014845    .038323   .000658          0          0          0
          a4 |         0          0         0          0          0          0
------------------------------------------------------------------------------

The coefficient estimates for orders 2 through 4 are very close to 0, as we expect given that the estimate for p1 is 1.

Similarly, for the second regime, the autoregression coefficients for orders 3 and 4 are essentially estimated to be 0.

. bayesstats summary {r2:a0} {r2:a1} (a2:({p2}>1)*{r2:b2}) 
>         (a3:({p2}>2)*{r2:b3}) (a4:({p2}>3)*{r2:b4})

Posterior summary statistics                      MCMC sample size =    10,000

          a2 : ({p2}>1)*{r2:b2}
          a3 : ({p2}>2)*{r2:b3}
          a4 : ({p2}>3)*{r2:b4}

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. dev.     MCSE     Median  [95% cred. interval]
-------------+----------------------------------------------------------------
r2           |
          a0 |  .4381789   .0809618   .003256   .4369241   .2837359   .6029897
          a1 |  .3064629   .0549879   .002723   .3078107   .1969199    .409214
-------------+----------------------------------------------------------------
          a2 |  .1306131   .0480967   .003019   .1336667          0   .2179723
          a3 | -.0008258   .0089415   .000611          0          0          0
          a4 |         0          0         0          0          0          0
------------------------------------------------------------------------------

The delay (d) is another important parameter in SETAR models. So far, we have considered a delay of one quarter, which may not be optimal. Although it is possible to incorporate (d) as a hyperparameter in a single Bayesian model, similarly to what I did with the order parameters, to avoid an overly complicated specification, I run three additional models with (d=2), (d=3), and (d=4) by using L2.rgdp, L3.rgdp, and L4.rgdp, respectively, as threshold variables and compare them with the model with (d=1).

. bayesmh rgdp = (cond(L2.rgdp<{r},                               
>         {r1:a0} + {r1:a1}*L1.rgdp + ({p1}>1)*{r1:b2}*L2.rgdp +  
>         ({p1}>2)*{r1:b3}*L3.rgdp  + ({p1}>3)*{r1:b4}*L4.rgdp,   
>         {r2:a0} + {r2:a1}*L1.rgdp + ({p2}>1)*{r2:b2}*L2.rgdp +  
>         ({p2}>2)*{r2:b3}*L3.rgdp  + ({p2}>3)*{r2:b4}*L4.rgdp)), 
>         likelihood(normal(cond(L2.rgdp<{r}, {sig1}, {sig2})))   
>         prior({p1}, index(0.2,0.5,0.2,0.1)) block({p1})         
>         prior({p2}, index(0.2,0.5,0.2,0.1)) block({p2})         
>         prior({r1:}, normal(0, 100)) block({r1:})               
>         prior({r2:}, normal(0, 100)) block({r2:})               
>         prior({sig1}, igamma(0.01, 0.01)) block({sig1})         
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2})         
>         prior({r}, uniform(0, 1)) block({r})                    
>         rseed(17) init({sig1} {sig2} 1 {p1} {p2} 2)             
>         burnin(5000) nomodelsummary notable

Burn-in ...
Simulation ...

Bayesian normal regression                       MCMC iterations  =     15,000
Random-walk Metropolis–Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        293
                                                 Acceptance rate  =      .3544
                                                 Efficiency:  min =     .01077
                                                              avg =     .05785
Log marginal-likelihood = -453.03074                          max =      .1904

. bayesmh rgdp = (cond(L3.rgdp<{r},                               
>         {r1:a0} + {r1:a1}*L1.rgdp + ({p1}>1)*{r1:b2}*L2.rgdp +  
>         ({p1}>2)*{r1:b3}*L3.rgdp  + ({p1}>3)*{r1:b4}*L4.rgdp,   
>         {r2:a0} + {r2:a1}*L1.rgdp + ({p2}>1)*{r2:b2}*L2.rgdp +  
>         ({p2}>2)*{r2:b3}*L3.rgdp  + ({p2}>3)*{r2:b4}*L4.rgdp)), 
>         likelihood(normal(cond(L3.rgdp<{r}, {sig1}, {sig2})))   
>         prior({p1}, index(0.2,0.5,0.2,0.1)) block({p1})         
>         prior({p2}, index(0.2,0.5,0.2,0.1)) block({p2})         
>         prior({r1:}, normal(0, 100)) block({r1:})               
>         prior({r2:}, normal(0, 100)) block({r2:})               
>         prior({sig1}, igamma(0.01, 0.01)) block({sig1})         
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2})         
>         prior({r}, uniform(0, 1)) block({r})                    
>         rseed(17) init({sig1} {sig2} 1 {p1} {p2} 2)             
>         burnin(5000) nomodelsummary notable

Burn-in ...
Simulation ...

Bayesian normal regression                       MCMC iterations  =     15,000
Random-walk Metropolis–Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        293
                                                 Acceptance rate  =       .338
                                                 Efficiency:  min =    .006822
                                                              avg =      .0667
Log marginal-likelihood = -472.66834                          max =      .2068

. bayesmh rgdp = (cond(L4.rgdp<{r},                               
>         {r1:a0} + {r1:a1}*L1.rgdp + ({p1}>1)*{r1:b2}*L2.rgdp +  
>         ({p1}>2)*{r1:b3}*L3.rgdp  + ({p1}>3)*{r1:b4}*L4.rgdp,   
>         {r2:a0} + {r2:a1}*L1.rgdp + ({p2}>1)*{r2:b2}*L2.rgdp +  
>         ({p2}>2)*{r2:b3}*L3.rgdp  + ({p2}>3)*{r2:b4}*L4.rgdp)), 
>         likelihood(normal(cond(L4.rgdp<{r}, {sig1}, {sig2})))   
>         prior({p1}, index(0.2,0.5,0.2,0.1)) block({p1})         
>         prior({p2}, index(0.2,0.5,0.2,0.1)) block({p2})         
>         prior({r1:}, normal(0, 100)) block({r1:})               
>         prior({r2:}, normal(0, 100)) block({r2:})               
>         prior({sig1}, igamma(0.01, 0.01)) block({sig1})         
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2})         
>         prior({r}, uniform(0, 1)) block({r})                    
>         rseed(17) init({sig1} {sig2} 1 {p1} {p2} 2)             
>         burnin(5000) nomodelsummary notable

Burn-in ...
Simulation ...

Bayesian normal regression                       MCMC iterations  =     15,000
Random-walk Metropolis–Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        293
                                                 Acceptance rate  =      .3749
                                                 Efficiency:  min =    .003091
                                                              avg =     .03948
Log marginal-likelihood = -484.88072                          max =      .1626

To save space, I show only the estimated log marginal-likelihoods of the models:

   d = 1     d = 2     d = 3     d = 4
    -416      -453      -473      -485


A delay of 1 gives us the best log marginal-likelihood, thus validating our initial choice.
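
Had I saved and stored the simulation results from each of the four fits, the same comparison could be reported in one table; a minimal sketch, using hypothetical stored names bster_d1 through bster_d4:

. bayesmh, saving(bster_d4sim, replace)
. estimates store bster_d4

. bayesstats ic bster_d1 bster_d2 bster_d3 bster_d4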

Final model

Here is our final model, which seems to provide the best analysis of the dynamics of rgdp.

. bayesmh rgdp = (cond(L1.rgdp<{r},                               
>                 {r1:a0}+{r1:a1}*L1.rgdp,                        
>                 {r2:a0}+{r2:a1}*L1.rgdp+{r2:a2}*L2.rgdp)),      
>         likelihood(normal(cond(L1.rgdp<{r}, {sig1}, {sig2})))   
>         prior({r1:}, normal(0, 100)) block({r1:})               
>         prior({r2:}, normal(0, 100)) block({r2:})               
>         prior({sig1}, igamma(0.01, 0.01)) block({sig1})         
>         prior({sig2}, igamma(0.01, 0.01)) block({sig2})         
>         prior({r}, uniform(-0.5, 0.5)) block({r})               
>         rseed(17) init({sig1} {sig2} 1) dots

Burn-in 2500 aaaaaaaaa1000aaaaaaaaa2000aaaaa done
Simulation 10000 .........1000.........2000.........3000.........4000.........
> 5000.........6000.........7000.........8000.........9000.........10000 done

Model summary
------------------------------------------------------------------------------
Likelihood:
  rgdp ~ normal(,)

Priors:
     {r1:a0 a1} ~ normal(0,100)
  {r2:a0 a1 a2} ~ normal(0,100)
    {sig1 sig2} ~ igamma(0.01,0.01)
            {r} ~ uniform(-0.5,0.5)

Expressions:
  expr1 : cond(L1.rgdp<{r},{r1:a0}+{r1:a1}*L1.rgdp,{r2:a0}+{r2:a1}*
          L1.rgdp+{r2:a2}*L2.rgdp)
  expr2 : cond(L1.rgdp<{r},{sig1},{sig2})
------------------------------------------------------------------------------

Bayesian normal regression                       MCMC iterations  =     12,500
Random-walk Metropolis–Hastings sampling         Burn-in          =      2,500
                                                 MCMC sample size =     10,000
                                                 Number of obs    =        295
                                                 Acceptance rate  =      .3497
                                                 Efficiency:  min =     .04804
                                                              avg =     .09848
Log marginal-likelihood = -414.93784                          max =      .1997

------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. dev.     MCSE     Median  [95% cred. interval]
-------------+----------------------------------------------------------------
r1           |
          a0 | -1.269802   .7325285   .024046  -1.268012  -2.746194   .2098139
          a1 |  -.858765   .3224316   .009838  -.8566081   -1.48599  -.1966072
-------------+----------------------------------------------------------------
r2           |
          a0 |  .4496805   .0830125   .002535   .4501146   .2868064   .6120813
          a1 |  .2980119   .0562405   .001979   .2955394   .1915367    .412661
          a2 |  .1302317   .0417601   .001504   .1285035   .0465193   .2122086
-------------+----------------------------------------------------------------
           r | -.4831157   .0213086   .000972  -.4905123  -.4996558  -.4155153
        sig1 |  6.747231   2.396798   .087641   6.255731   3.666464    12.7075
        sig2 |  .6563023    .055926   .001251   .6510408   .5575998   .7745131
------------------------------------------------------------------------------

In conclusion, the expansion state, r2, is characterized by positive trend and autocorrelation, relatively higher persistence, and lower volatility. The recession state, r1, on the other hand, exhibits negative trend and autocorrelation and higher volatility.

Although SETAR(1) provides a much more detailed analysis than a simple AR(1) model, it still does not capture all the changes in the dynamics of GDP growth. For example, the expansion periods before 1985 seem to have much higher volatility than those after 1985. Alternative regime-switching models may need to be considered to address this and other aspects of the time evolution of economic growth.

References
Beaudry, P., and G. Koop. 1993. Do recessions permanently change output? Journal of Monetary Economics 31: 149–163. https://doi.org/10.1016/0304-3932(93)90042-E.

Cao, C. Q., and R. S. Tsay. 1992. Nonlinear time-series analysis of stock volatilities. Journal of Applied Econometrics 7: S165–S185. https://doi.org/10.1002/jae.3950070512.

Hansen, B. E. 2011. Threshold autoregression in economics. Statistics and Its Interface 4: 123–127. https://doi.org/10.4310/SII.2011.v4.n2.a4.

Tong, H. 1982. Discontinuous decision processes and threshold autoregressive time series modelling. Biometrika 69: 274–276. https://doi.org/10.2307/2335885.

———. 2011. Threshold models in time series analysis—30 years on. Statistics and Its Interface 4: 107–118. https://dx.doi.org/10.4310/SII.2011.v4.n2.a1.


