Monday, February 16, 2026

Bayesian binary item response theory models using bayesmh


This post was written jointly with Yulia Marchenko, Executive Director of Statistics, StataCorp.

Table of Contents

Overview
1PL model
2PL model
3PL model
4PL model
5PL model
Conclusion

Overview

Item response theory (IRT) is used for modeling the relationship between the latent abilities of a group of subjects and the exam items used for measuring their abilities. Stata 14 introduced a suite of commands for fitting IRT models using maximum likelihood; see, for example, the blog post Spotlight on irt by Rafal Raciborski and the [IRT] Item Response Theory manual for more details. In this post, we demonstrate how to fit Bayesian binary IRT models by using the redefine() option introduced for the bayesmh command in Stata 14.1. We also use the likelihood option dbernoulli() available as of the update on 03 Mar 2016 for fitting the Bernoulli distribution. If you are not familiar with the concepts and jargon of Bayesian statistics, you may want to watch the introductory videos on the Stata Youtube channel before proceeding.

Introduction to Bayesian analysis, part 1: The basic concepts
Introduction to Bayesian analysis, part 2: MCMC and the Metropolis-Hastings algorithm

We use the abridged version of the mathematics and science data from De Boeck and Wilson (2004), masc1. The dataset includes 800 student responses to 9 test questions intended to measure mathematical ability.

The irt suite fits IRT models using data in the wide form – one observation per subject with items recorded in separate variables. To fit IRT models using bayesmh, we need data in the long form, where items are recorded as multiple observations per subject. We thus reshape the dataset in a long form: we have a single binary response variable, y, and two index variables, item and id, which identify the items and subjects, respectively. This allows us to formulate our IRT models as multilevel models. The following commands load and prepare the dataset.


. webuse masc1
(Data from De Boeck & Wilson (2004))

. generate id = _n

. quietly reshape long q, i(id) j(item)

. rename q y
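As an aside, the wide-to-long reshape performed above can be sketched in a few lines of plain Python (illustrative only; this is not part of the Stata workflow, and the toy data below is hypothetical, not the masc1 dataset):

```python
# Illustrative sketch of what `reshape long q, i(id) j(item)` does: each wide
# record (one row per subject, with items as columns q1, q2, ...) becomes one
# long record per subject-item pair. The names id, item, and y mirror the post.
def reshape_long(wide_rows, n_items):
    """Turn {'id': ..., 'q1': ..., ...} rows into {'id', 'item', 'y'} rows."""
    long_rows = []
    for row in wide_rows:
        for item in range(1, n_items + 1):
            long_rows.append({"id": row["id"], "item": item, "y": row[f"q{item}"]})
    return long_rows

# Toy data with 2 subjects and 3 items (the real dataset has 800 and 9):
wide = [{"id": 1, "q1": 1, "q2": 0, "q3": 1},
        {"id": 2, "q1": 0, "q2": 0, "q3": 1}]
print(len(reshape_long(wide, 3)))  # 6 long rows = 2 subjects x 3 items
```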

To ensure that we include all levels of item and id in our models, we use fvset base none to keep the base categories.


. fvset base none id item

In what follows, we present eight Bayesian binary IRT models increasing in complexity and explanatory power. We perform Bayesian model comparison to gain insight into which model may be more appropriate for the data at hand.

For high-dimensional models such as IRT models, you may see differences in the estimation results between different platforms or different flavors of Stata because of the nature of the Markov chain Monte Carlo (MCMC) sampling and finite numerical precision. These differences are not a source of concern; they will be within the range of the MCMC variability and will lead to similar inferential conclusions. The differences will diminish as the MCMC sample size increases. The results in this post are obtained from Stata/SE on the 64-bit Linux platform using the default 10,000 MCMC sample size.

Let the items be indexed by \(i=1,\dots,9\) and the subjects by \(j=1,\dots,800\). Let \(\theta_j\) be the latent mathematical ability of subject \(j\), and let \(Y_{ij}\) be the response of subject \(j\) to item \(i\).

Back to table of contents

1PL model

In the one-parameter logistic (1PL) model, the probability of a correct response is modeled as an inverse-logit function of location parameters \(b_i\), also referred to as item difficulties, and a common slope parameter \(a\), also referred to as item discrimination:

\[
P(Y_{ij}=1) = {\rm InvLogit}\{a(\theta_j-b_i)\} =
\frac{\exp\{a(\theta_j-b_i)\}}{1+\exp\{a(\theta_j-b_i)\}}
\]
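The 1PL response probability itself is easy to compute directly. The following Python sketch (illustrative only, not part of the Stata workflow) writes out the inverse-logit form above:

```python
import math

# Illustrative sketch: the 1PL response probability
# P(Y_ij = 1) = InvLogit{a * (theta_j - b_i)}, written out directly.
def invlogit(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_correct_1pl(theta, a, b):
    """Probability of a correct response for ability theta, common
    discrimination a, and item difficulty b."""
    return invlogit(a * (theta - b))

# A subject of average ability (theta = 0) facing an average item (b = 0)
# answers correctly with probability 0.5, whatever the discrimination:
print(p_correct_1pl(0.0, 0.86, 0.0))  # 0.5
```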

Typically, the abilities are assumed to be normally distributed:
\[
\theta_j \sim {\rm N}(0,1)
\]
In a multilevel framework, the \(\theta_j\)'s represent random effects. In a Bayesian framework, we use the term "random effects" to refer to the parameters corresponding to the levels of the grouping variables identifying the hierarchy of the data.

A Bayesian formulation of the 1PL model also requires prior specification for the model parameters \(a\) and \(b_i\). The discrimination parameter \(a\) is assumed to be positive and is often modeled in the log scale. Because we have no prior knowledge about the discrimination and difficulty parameters, we assume that the prior distributions of \(\ln(a)\) and \(b_i\) have support on the whole real line, are symmetric, and are centered at 0. A normal prior distribution is thus a natural choice. We additionally assume that \(\ln(a)\) and \(b_i\) are close to 0 and have prior variance of 1, which is an entirely subjective decision. We thus assign \(\ln(a)\) and \(b_i\) standard normal prior distributions:

\[ \ln(a) \sim {\rm N}(0, 1) \] \[ b_i \sim {\rm N}(0, 1) \]

To specify the likelihood function of the 1PL model in bayesmh, we use a nonlinear equation specification for the response variable y. The direct nonlinear specification for this model is


bayesmh y = ({discrim}*({subj:i.id}-{diff:i.item})), likelihood(logit) ...

where {discrim} is the discrimination parameter \(a\), {subj:i.id} are latent abilities \(\theta_j\), and {diff:i.item} are item difficulties \(b_i\). The logit model is used for the probability of a success, \(P(Y_{ij}=1)\). The specification {subj:i.id} in the above nonlinear expression is viewed as a substitutable expression for linear combinations of indicators associated with the id variable and parameters \(\theta_j\). This specification may be computationally prohibitive with a large number of subjects. A more efficient solution is to use the redefine() option to include the subject random effects \(\theta_j\) in the model. The same argument may apply to the {diff:i.item} specification when there are many items. Thus, it may be computationally convenient to treat the \(b_i\) parameters as "random effects" in the specification and use the redefine() option to include them in the model.

A more efficient specification is thus


bayesmh y = ({discrim}*({subj:}-{diff:})), likelihood(logit) ///
               redefine(subj:i.id) redefine(diff:i.item) ...

where {subj:} and {diff:} in the nonlinear specification now represent the \(\theta_j\) and \(b_i\) parameters, respectively, without using expansions into linear combinations of indicator variables.

Below, we show the full bayesmh specification of the 1PL model and the output summary. In our examples, we treat the abilities {subj:i.id} as nuisance parameters and exclude them from the final results. The discrimination model parameter {discrim} must be positive and is thus initialized with 1. A longer burn-in period, burnin(5000), allows for longer adaptation of the MCMC sampler, which is needed given the large number of parameters in the model. Finally, the estimation results are saved for later model comparison.


. set seed 14

. bayesmh y = ({discrim}*({subj:}-{diff:})), likelihood(logit) ///
>         redefine(diff:i.item) redefine(subj:i.id)            ///
>         prior({subj:i.id},   normal(0, 1))                   ///
>         prior({discrim},     lognormal(0, 1))                ///
>         prior({diff:i.item}, normal(0, 1))                   ///
>         init({discrim} 1) exclude({subj:i.id})               ///
>         burnin(5000) saving(sim1pl, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ logit({discrim}*(xb_subj-xb_diff))

Priors: 
  {diff:i.item} ~ normal(0,1)                                              (1)
    {subj:i.id} ~ normal(0,1)                                              (2)
      {discrim} ~ lognormal(0,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_diff.
(2) Parameters are elements of the linear form xb_subj.

Bayesian logistic regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3074
                                                 Efficiency:  min =     .02691
                                                              avg =     .06168
Log marginal likelihood =           .                         max =     .09527
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |       Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.6934123   .0998543   .003576  -.6934789  -.8909473  -.4917364
          2  | -.1234553   .0917187   .002972  -.1241642  -.3030341   .0597863
          3  | -1.782762   .1323252    .00566  -1.781142   -2.05219  -1.534451
          4  |  .3152835   .0951978   .003289   .3154714   .1279147   .4981263
          5  |  1.622545    .127213   .005561   1.619388   1.377123   1.883083
          6  |  .6815517   .0978777   .003712   .6788345   .4911366    .881128
          7  |  1.303482   .1173994   .005021   1.302328   1.084295   1.544913
          8  | -2.353975   .1620307   .008062  -2.351207  -2.672983  -2.053112
          9  | -1.168668   .1120243   .004526  -1.163922  -1.392936  -.9549209
-------------+----------------------------------------------------------------
     discrim |  .8644787   .0439804   .002681   .8644331   .7818035   .9494433
------------------------------------------------------------------------------

file sim1pl.dta saved

. estimates store est1pl

The sampling efficiency is acceptable, about 6% on average, with no indication of convergence problems. Although detailed convergence inspection of all parameters is outside the scope of this post, we recommend that you perform it by using, for example, the bayesgraph diagnostics command.

Although we used informative priors for the mannequin parameters, the estimation outcomes from our Bayesian mannequin should not that totally different from the utmost probability estimates obtained utilizing the irt 1pl command (see instance 1 in [IRT] irt 1pl). For instance, the posterior imply estimate for {discrim} is 0.86 with an MCMC commonplace error of 0.003, whereas irt 1pl experiences 0.85 with an ordinary error of 0.05.

The log-marginal likelihood is reported missing because we have excluded the {subj:i.id} parameters from the simulation results, and the Laplace-Metropolis estimator of the log-marginal likelihood is not available in such cases. This estimator requires simulation results for all model parameters to compute the log-marginal likelihood.

Back to table of contents

2PL model

The two-parameter logistic (2PL) model extends the 1PL model by allowing for item-specific discrimination. The probability of correct response is now modeled as a function of item-specific slope parameters \(a_i\):
\[
P(Y_{ij}=1) = {\rm InvLogit}\{a_i(\theta_j-b_i)\} =
\frac{\exp\{a_i(\theta_j-b_i)\}}{1+\exp\{a_i(\theta_j-b_i)\}}
\]

The prior specification for \(\theta_j\) remains the same as in the 1PL model. We will, however, apply more elaborate prior specifications for the \(a_i\)'s and \(b_i\)'s. It is a good practice to use proper prior specifications without overwhelming the evidence from the data. The impact of the priors can be controlled by introducing additional hyperparameters. For example, Kim and Bolt (2007) proposed the use of a normal prior for the difficulty parameters with unknown mean and variance. Extending this approach to the discrimination parameters as well, we apply a hierarchical Bayesian model in which the \(\ln(a_i)\) and \(b_i\) parameters have the following prior specifications:

\[ \ln(a_i) \sim {\rm N}(\mu_a, \sigma_a^2) \] \[ b_i \sim {\rm N}(\mu_b, \sigma_b^2) \]

The mean hyperparameters, \(\mu_a\) and \(\mu_b\), and variance hyperparameters, \(\sigma_a^2\) and \(\sigma_b^2\), require informative prior specifications. We assume that the means are centered at 0 with a variation of 0.1:
\[
\mu_a, \mu_b \sim {\rm N}(0, 0.1)
\]

To lower the variability of the \(\ln(a_i)\) and \(b_i\) parameters, we apply an inverse-gamma prior with shape 10 and scale 1 for the variance parameters:

\[
\sigma_a^2, \sigma_b^2 \sim {\rm InvGamma}(10, 1)
\]

Thus, the prior mean of \(\sigma_a^2\) and \(\sigma_b^2\) is about 0.1.
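This prior mean is easy to verify. The Python sketch below (illustrative only, not part of the Stata workflow) checks by Monte Carlo that an \({\rm InvGamma}(10, 1)\) random variable has mean \(1/(10-1)\approx 0.11\), using the fact that the reciprocal of a \({\rm Gamma}(10, 1)\) draw is an \({\rm InvGamma}(10, 1)\) draw:

```python
import random

# Illustrative sketch: Monte Carlo check of the InvGamma(10, 1) prior mean.
# If G ~ Gamma(shape=10, scale=1), then 1/G ~ InvGamma(10, 1), whose exact
# mean is scale / (shape - 1) = 1/9, i.e., about 0.11.
random.seed(14)
draws = [1.0 / random.gammavariate(10.0, 1.0) for _ in range(100_000)]
mc_mean = sum(draws) / len(draws)
exact_mean = 1.0 / (10.0 - 1.0)
print(round(mc_mean, 2), round(exact_mean, 2))  # both about 0.11
```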

In the bayesmh specification, the hyperparameters \(\mu_a\), \(\mu_b\), \(\sigma_a^2\), and \(\sigma_b^2\) are referred to as {mu_a}, {mu_b}, {var_a}, and {var_b}, respectively. We use the redefine(discrim:i.item) option to include in the model the discrimination parameters \(a_i\), referred to as {discrim:} in the likelihood specification.

Regarding the MCMC simulation, we adjust some of the default settings. The hyperparameters {mu_a}, {mu_b}, {var_a}, and {var_b} are placed in separate blocks to improve the simulation efficiency. The discrimination parameters {discrim:i.item} must be positive and are thus initialized with 1s.


. set seed 14

. bayesmh y = ({discrim:}*({subj:}-{diff:})), likelihood(logit) ///
>         redefine(discrim:i.item) redefine(diff:i.item)        ///
>         redefine(subj:i.id)                                   ///
>         prior({subj:i.id},      normal(0, 1))                 ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))   ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))      ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))               ///
>         prior({var_a} {var_b},  igamma(10, 1))                ///
>         block({mu_a mu_b var_a var_b}, split)                 ///
>         init({discrim:i.item} 1)                              ///
>         exclude({subj:i.id}) burnin(5000) saving(sim2pl, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ logit(xb_discrim*(xb_subj-xb_diff))

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
       {subj:i.id} ~ normal(0,1)                                           (3)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_subj.

Bayesian logistic regression                     MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3711
                                                 Efficiency:  min =     .01617
                                                              avg =     .04923
Log marginal likelihood =           .                         max =      .1698
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |       Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
discrim      |
        item |
          1  |  1.430976   .1986011   .010953   1.413063   1.089405   1.850241
          2  |  .6954823   .1081209   .004677   .6897267   .4985004   .9276975
          3  |  .9838528   .1343908   .009079   .9780275   .7506566   1.259427
          4  |  .8167792   .1169157   .005601   .8136229   .5992495   1.067578
          5  |  .9402715   .1351977   .010584   .9370298   .6691103   1.214885
          6  |  .9666747   .1420065   .008099   .9616285   .7038868   1.245007
          7  |  .5651287   .0864522   .006201   .5617302   .3956216   .7431265
          8  |  1.354053   .2048404   .015547   1.344227   .9791096   1.761437
          9  |  .7065096   .1060773   .006573   .6999745   .5102749   .9271799
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.5070314   .0784172   .003565   -.507922   -.671257  -.3596057
          2  | -.1467198    .117422   .003143  -.1456633  -.3895978   .0716841
          3  | -1.630259   .1900103   .013494  -1.612534  -2.033169  -1.304171
          4  |  .3273735   .1073891   .003565   .3231703   .1248782   .5492114
          5  |  1.529584   .1969554    .01549   1.507982   1.202271   1.993196
          6  |  .6325194    .115724   .005613   .6243691   .4272131   .8851649
          7  |  1.827013   .2884057   .019582    1.79828   1.349654   2.490633
          8  | -1.753744   .1939559   .014743  -1.738199  -2.211475  -1.438146
          9  | -1.384486   .2059005   .012105  -1.361195  -1.838918  -1.059687
-------------+----------------------------------------------------------------
        mu_a | -.1032615   .1148176   .003874   -.102376  -.3347816   .1277031
       var_a |  .1129835   .0356735   .001269   .1056105    .063403   .1981331
        mu_b | -.0696525   .2039387   .004949   -.072602  -.4641566   .3298393
       var_b |  .6216005   .2023137   .008293   .5843444   .3388551   1.101153
------------------------------------------------------------------------------

file sim2pl.dta saved

. estimates store est2pl

The average simulation efficiency is about 5%, but some of the parameters converge more slowly than others, such as {diff:7.item}, which has the largest MCMC standard error (0.02) among the difficulty parameters. If this were a rigorous study, we would recommend longer simulations with MCMC sample sizes of at least 50,000 to lower the MCMC standard errors.

We can compare the 1PL and 2PL models by using the deviance information criterion (DIC) available with the bayesstats ic command.


. bayesstats ic est1pl est2pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
      est1pl |  8122.428
      est2pl |  8055.005
------------------------

DIC is often used in Bayesian model selection as an alternative to the AIC and BIC criteria and can be easily obtained from an MCMC sample. Larger MCMC samples produce more reliable DIC estimates. Because different MCMC samples produce different sample DIC values and the sample approximation error in calculating DIC is not known, one should not rely solely on DIC when choosing a model.

Lower DIC values indicate better fit. The DIC of the 2PL model (8,055) is markedly lower than the DIC of the 1PL model (8,122), implying better fit of the 2PL model.
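For readers curious about the mechanics behind such comparisons, the Python sketch below (illustrative only, on a toy Bernoulli model rather than our IRT data) computes DIC from posterior draws using the standard definition \({\rm DIC} = \bar{D} + p_D\) with \(p_D = \bar{D} - D(\bar{\theta})\) of Spiegelhalter et al. (2002); whether bayesstats ic uses exactly this estimator of \(p_D\) is an assumption here:

```python
import math
import random

# Illustrative sketch of a DIC computation on a toy Bernoulli model:
# DIC = Dbar + pD, where Dbar is the posterior mean deviance and
# pD = Dbar - D(theta_bar) estimates the effective number of parameters.
def deviance(p, y):
    """Bernoulli deviance, -2 * log-likelihood, at success probability p."""
    return -2.0 * sum(math.log(p if yi == 1 else 1.0 - p) for yi in y)

random.seed(14)
y = [1, 1, 0, 1, 0, 1, 1, 0]            # toy data: 5 successes, 3 failures
# The exact posterior of p under a flat prior is Beta(1 + 5, 1 + 3); we mimic
# an MCMC sample by drawing from it directly:
draws = [random.betavariate(6, 4) for _ in range(5000)]

dbar = sum(deviance(p, y) for p in draws) / len(draws)  # posterior mean deviance
dhat = deviance(sum(draws) / len(draws), y)             # deviance at posterior mean
p_d = dbar - dhat   # a small positive number for this one-parameter model
dic = dbar + p_d
print(dic > dbar)   # True: DIC penalizes the mean deviance by pD
```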

Back to table of contents

3PL model

The three-parameter logistic (3PL) model introduces lower asymptote parameters \(c_i\), also referred to as guessing parameters. The probability of giving a correct response is given by

\[
P(Y_{ij}=1) = c_i + (1-c_i)\,{\rm InvLogit}\{a_i(\theta_j-b_i)\}, \quad c_i > 0
\]
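The effect of the guessing parameter is easiest to see by computing the curve directly. The Python sketch below (illustrative only, not part of the Stata workflow) shows that \(c_i\) lifts the lower asymptote of the response probability from 0 up to \(c_i\):

```python
import math

# Illustrative sketch: the 3PL response probability, where the guessing
# parameter c lifts the lower asymptote of the 2PL curve from 0 up to c.
def invlogit(x):
    return 1.0 / (1.0 + math.exp(-x))

def p_correct_3pl(theta, a, b, c):
    """P(Y=1) = c + (1 - c) * InvLogit{a * (theta - b)}."""
    return c + (1.0 - c) * invlogit(a * (theta - b))

# Even a subject of very low ability retains a success probability of
# about c, which is what makes c a "guessing" parameter:
print(round(p_correct_3pl(-10.0, 1.0, 0.0, 0.1), 3))  # 0.1
```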

The guessing parameters may be difficult to estimate using maximum likelihood. Indeed, the irt 3pl command with the sepguessing option fails to converge, as you can verify by typing


. irt 3pl q1-q9, sepguessing

on the original dataset.

It’s thus necessary to specify an informative prior for (c_i). We assume that the prior imply of the guessing parameters is about 0.1 and thus apply
[
c_i sim {rm InvGamma}(10, 1)
]

Similarly to the discrimination and difficulty parameters, the \(c_i\)'s are introduced as random-effects parameters in the bayesmh specification and are referred to as {gues:} in the likelihood specification.

Unlike for the 1PL and 2PL models, we cannot use the likelihood(logit) option to model the probability of success, because the probability of correct response is no longer an inverse-logit transformation of the parameters. Instead, we use likelihood(dbernoulli()) to model the probability of success of a Bernoulli outcome directly.

To have a valid initialization of the MCMC sampler, we assign the \(c_i\)'s positive starting values, 0.1.


. set seed 14

. bayesmh y, likelihood(dbernoulli({gues:}+(1-{gues:})*                     ///
>                                  invlogit({discrim:}*({subj:}-{diff:})))) ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(subj:i.id)                      ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b}, split)                             ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1)                        ///
>         exclude({subj:i.id}) burnin(5000) saving(sim3pls, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(xb_gues+(1-xb_gues)*invlogit(xb_discrim*(xb_subj-xb_diff)),1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
       {subj:i.id} ~ normal(0,1)                                           (4)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_subj.

Bayesian Bernoulli model                         MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3496
                                                 Efficiency:  min =      .0148
                                                              avg =     .03748
Log marginal likelihood =           .                         max =      .2044
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |       Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
discrim      |
        item |
          1  |  1.712831   .2839419   .018436   1.681216   1.232644   2.351383
          2  |  .8540871   .1499645   .008265   .8414399   .6058463   1.165732
          3  |  1.094723   .1637954    .01126   1.081756    .817031   1.454845
          4  |  1.090891   .2149095   .013977   1.064651   .7488589   1.588164
          5  |  1.363236   .2525573   .014858   1.338075   .9348136   1.954695
          6  |  1.388325   .3027436   .024245   1.336303   .9466695   2.068181
          7  |  .9288217   .2678741   .021626   .8750048   .5690308   1.603375
          8  |  1.457763   .2201065    .01809   1.438027   1.068937   1.940431
          9  |  .7873631    .127779   .007447   .7796568    .563821    1.06523
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.2933734   .0976177   .006339  -.2940499  -.4879558  -.0946848
          2  |  .2140365    .157158   .008333   .2037788  -.0553537   .5550411
          3  | -1.326351   .1981196   .013101  -1.326817  -1.706671  -.9307443
          4  |  .6367877   .1486799   .007895   .6277349   .3791045   .9509913
          5  |  1.616056   .1799378    .00966   1.606213   1.303614   2.006817
          6  |  .8354059    .124184    .00656   .8191839    .614221   1.097801
          7  |  2.066205   .3010858   .018377   2.034757   1.554484   2.709601
          8  | -1.555583   .1671435   .012265   -1.54984   -1.89487  -1.267001
          9  | -.9775626   .2477279   .016722  -.9936727  -1.431964  -.4093629
-------------+----------------------------------------------------------------
gues         |
        item |
          1  |  .1078598   .0337844     .0019   .1020673   .0581353   .1929404
          2  |  .1128113   .0372217   .002162   .1065996   .0596554   .2082417
          3  |   .123031   .0480042   .002579   .1127147   .0605462   .2516237
          4  |  .1190103   .0390721   .002369   .1123544   .0617698   .2095427
          5  |  .0829503   .0185785   .001275   .0807116   .0514752   .1232547
          6  |  .1059315   .0289175   .001708   .1022741   .0584959   .1709483
          7  |  .1235553   .0382661   .002964   .1186648   .0626495   .2067556
          8  |  .1142118   .0408348   .001733   .1062507   .0592389   .2134006
          9  |  .1270767   .0557821   .003939    .113562   .0621876   .2825752
-------------+----------------------------------------------------------------
        mu_a |   .109161   .1218499   .005504   .1126253   -.135329   .3501061
       var_a |   .108864   .0331522   .001053   .1030106   .0604834   .1860996
        mu_b |  .0782094   .1974657   .004367   .0755023  -.3067717   .4638104
       var_b |  .5829738   .1803167   .006263   .5562159   .3260449   1.034225
------------------------------------------------------------------------------

file sim3pls.dta saved

. estimates store est3pls

The estimated posterior means of the \(c_i\)'s range between 0.08 and 0.13. Clearly, the introduction of guessing parameters has an impact on the item discrimination and difficulty parameters. For example, the estimated posterior means of \(\mu_a\) and \(\mu_b\) shift from -0.10 and -0.07, respectively, in the 2PL model to 0.11 and 0.08, respectively, in the 3PL model.

Because the estimated guessing parameters are not that different, one may ask whether item-specific guessing parameters are really necessary. To answer this question, we fit a model with a common guessing parameter, {gues}, and compare it with the previous model.


. set seed 14

. bayesmh y, likelihood(dbernoulli({gues}+(1-{gues})*                       ///
>                                  invlogit({discrim:}*({subj:}-{diff:})))) ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(subj:i.id)                                               ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues},           igamma(10, 1))                            ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b gues}, split)                        ///
>         init({discrim:i.item} 1 {gues} 0.1)                               ///
>         exclude({subj:i.id}) burnin(5000) saving(sim3pl, replace)
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial({gues}+(1-{gues})*invlogit(xb_discrim*(xb_subj-xb_diff)),1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
       {subj:i.id} ~ normal(0,1)                                           (3)
            {gues} ~ igamma(10,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_subj.

Bayesian Bernoulli model                         MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3753
                                                 Efficiency:  min =     .01295
                                                              avg =     .03714
Log marginal likelihood =           .                         max =      .1874
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |       Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
discrim      |
        item |
          1  |  1.692894   .2748163   .021944   1.664569   1.232347   2.299125
          2  |  .8313512   .1355267    .00606   .8218212   .5928602   1.125729
          3  |  1.058833   .1611742   .014163   1.054126   .7676045   1.393611
          4  |  1.041808   .1718472   .008782   1.029867   .7398569   1.397073
          5  |  1.534997   .3208687   .023965   1.497019   1.019998   2.266078
          6  |   1.38296   .2581948   .019265   1.355706   .9559487   1.979358
          7  |  .8310222   .1698206   .012896   .8107371   .5736484   1.248736
          8  |  1.442949   .2266268   .017562   1.431204   1.066646   1.930829
          9  |    .77944   .1159669   .007266   .7750891   .5657258   1.014941
-------------+----------------------------------------------------------------
diff         |
        item |
          1  | -.3043161   .0859905   .005373  -.2968324  -.4870583  -.1407109
          2  |  .1814508   .1289251   .006543   .1832146  -.0723988   .4313265
          3  | -1.391216   .1924384   .014986  -1.373093  -1.809343  -1.050919
          4  |  .5928491   .1262631   .006721   .5829347    .356614    .857743
          5  |  1.617348   .1929263   .011604   1.601534   1.293032   2.061096
          6  |   .817635   .1172884   .006125    .812838   .5990503   1.064322
          7  |  2.006949   .2743517    .01785   1.981052   1.556682   2.594236
          8  | -1.576235   .1747855   .013455  -1.559435  -1.952676  -1.272108
          9  | -1.039362   .1840773    .01138   -1.02785  -1.432058  -.7160181
-------------+----------------------------------------------------------------
        gues |  .1027336   .0214544   .001753   .1022211   .0627299   .1466367
        mu_a |  .1009741    .123915   .006567   .0965353  -.1343028   .3510697
       var_a |  .1121003   .0344401   .001154   .1059563   .0628117   .1970842
        mu_b |  .0632173   .1979426   .004572   .0666684  -.3292497   .4482957
       var_b |  .5861236   .1818885   .006991   .5574743   .3239369   1.053172
------------------------------------------------------------------------------

file sim3pl.dta saved

. estimates store est3pl

We can again compare the two 3PL models by using the bayesstats ic command:


. bayesstats ic est3pls est3pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
     est3pls |  8049.425
      est3pl |  8049.426
------------------------

Although the estimated DICs of the two 3PL models are essentially the same, we decide for demonstration purposes to proceed with the model with item-specific guessing parameters.
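As a refresher on what bayesstats ic reports: the deviance information criterion is \({\rm DIC} = \bar{D} + p_D\), where \(\bar{D}\) is the posterior mean deviance and \(p_D = \bar{D} - D(\bar{\theta})\) is the effective number of parameters. The short sketch below (in Python rather than Stata, with made-up posterior draws purely for illustration) shows the arithmetic; bayesmh computes this internally from the actual MCMC sample.

```python
import math

def deviance(p_list, y_list):
    """Deviance of Bernoulli data: -2 times the log likelihood."""
    ll = sum(y * math.log(p) + (1 - y) * math.log(1 - p)
             for p, y in zip(p_list, y_list))
    return -2.0 * ll

# Toy "posterior draws" of the success probabilities for 3 observations
draws = [[0.70, 0.60, 0.80],
         [0.75, 0.55, 0.85],
         [0.65, 0.60, 0.75]]
y = [1, 0, 1]

d_bar = sum(deviance(p, y) for p in draws) / len(draws)  # posterior mean deviance
p_bar = [sum(col) / len(draws) for col in zip(*draws)]   # posterior mean of p
p_d   = d_bar - deviance(p_bar, y)                       # effective no. of parameters
dic   = d_bar + p_d
```

Because \(p_D\) penalizes model complexity, two models with essentially the same DIC, as here, offer no DIC-based reason to prefer one over the other.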

Back to table of contents

4PL model

The four-parameter logistic (4PL) model extends the 3PL model by adding item-specific upper asymptote parameters \(d_i\):
\[
P(Y_{ij}=1) = c_i + (d_i-c_i)\,{\rm InvLogit}\{a_i(\theta_j-b_i)\}, \quad c_i < d_i < 1
\]
The \(d_i\) parameter can be viewed as an upper limit on the probability of a correct response to the \(i\)th item. The probability of a correct answer by subjects with very high ability can thus be no greater than \(d_i\).
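To make the formula concrete, here is a small sketch of the 4PL item characteristic curve (in Python rather than Stata; the parameter values are arbitrary illustrations, not estimates from masc1):

```python
import math

def p_4pl(theta, a, b, c, d):
    """4PL response probability: a = discrimination, b = difficulty,
    c = guessing (lower asymptote), d = upper asymptote, c < d < 1."""
    inv_logit = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return c + (d - c) * inv_logit

# Very low ability stays near the guessing floor c; very high ability
# approaches the ceiling d rather than 1:
p_low  = p_4pl(-6.0, a=1.0, b=0.0, c=0.1, d=0.9)
p_high = p_4pl( 6.0, a=1.0, b=0.0, c=0.1, d=0.9)
```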

We restrict the \(d_i\)'s to the \((0.8,1)\) range and assign them a \({\rm Uniform}(0.8,1)\) prior. For other parameters, we use the same priors as in the 3PL model.

In the bayesmh specification of the model, the condition \(c_i < d_i\) is incorporated in the likelihood, and the condition \(d_i < 1\) is implied by the specified prior for the \(d_i\)'s. We initialize the \(d_i\)'s to 0.9. We use the notable option to suppress the long table output.
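The role of the cond() factor is worth spelling out: when \(c_i \ge d_i\), the likelihood evaluates to missing, and bayesmh rejects such a proposal, which enforces the constraint during sampling. A hypothetical Python analogue of this trick:

```python
import math

def constrained_like(p, c, d):
    """Mimic cond({gues:}<{d:},1,.): multiply the likelihood value by 1
    when c < d holds and by missing (NaN here) when it is violated, so a
    Metropolis-Hastings sampler would reject the offending proposal."""
    return p * (1.0 if c < d else float("nan"))

ok  = constrained_like(0.6, c=0.10, d=0.9)  # constraint satisfied
bad = constrained_like(0.6, c=0.95, d=0.9)  # violated: evaluates to NaN
```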


. set seed 14

. bayesmh y, likelihood(dbernoulli(({gues:}+({d:}-{gues:})*                 ///
>                                  invlogit({discrim:}*({subj:}-{diff:})))* ///
>                                  cond({gues:}<{d:},1,.)))                 ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(d:i.item)  redefine(subj:i.id)  ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({d:i.item},       uniform(0.8, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b}, split)                             ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d:i.item} 0.9)         ///
>         exclude({subj:i.id}) burnin(5000) saving(sim4pls, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
        {d:i.item} ~ uniform(0.8,1)                                        (4)
       {subj:i.id} ~ normal(0,1)                                           (5)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+(xb_d-xb_gues)*invlogit(xb_discrim*(xb_subj-xb_diff)))* con
          d(xb_gues<xb_d,1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_d.
(5) Parameters are elements of the linear form xb_subj.

We use bayesstats summary to display results for selected model parameters.


. bayesstats summary {d:i.item} {mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
d            |
        item |
          1  |  .9598183   .0255321   .001948   .9621874   .9044441   .9981723
          2  |  .9024564   .0565702   .007407   .9019505   .8066354   .9944216
          3  |  .9525519   .0281878   .002845   .9551054   .8972454   .9971564
          4  |  .8887963   .0561697   .005793   .8859503   .8036236   .9916784
          5  |  .8815547   .0588907   .007215   .8708021   .8031737   .9926549
          6  |  .8891188   .0586482   .006891    .881882   .8024593   .9935512
          7  |   .874271   .0561718   .008087   .8635082   .8018176   .9880433
          8  |  .9663644   .0147606   .001121   .9667563   .9370666   .9950912
          9  |   .889164   .0486038   .005524   .8834207   .8084921   .9857415
-------------+----------------------------------------------------------------
        mu_a |  .3336887   .1436216   .009742    .334092   .0562924   .6164115
       var_a |  .1221547   .0406908   .002376   .1144729   .0642768   .2229326
        mu_b | -.0407488   .1958039   .005645  -.0398847  -.4220523   .3323791
       var_b |  .4991736   .1612246    .00629   .4660071   .2802531   .9023824
------------------------------------------------------------------------------

The bayesmh command issued a note indicating high autocorrelation for some of the model parameters. This may be related to slower MCMC convergence or more substantial problems in the model specification. It is thus worthwhile to inspect the individual autocorrelations of the parameters. We can do so by using the bayesstats ess command. The parameters with lower effective sample size (ESS) have higher autocorrelation and vice versa.


. bayesstats ess {d:i.item} {mu_a var_a mu_b var_b}

Efficiency summaries    MCMC sample size =    10,000
 
----------------------------------------------------
             |        ESS   Corr. time    Efficiency
-------------+--------------------------------------
d            |
        item |
          1  |     171.82        58.20        0.0172
          2  |      58.33       171.43        0.0058
          3  |      98.17       101.87        0.0098
          4  |      94.02       106.36        0.0094
          5  |      66.62       150.11        0.0067
          6  |      72.44       138.05        0.0072
          7  |      48.25       207.26        0.0048
          8  |     173.30        57.70        0.0173
          9  |      77.41       129.19        0.0077
-------------+--------------------------------------
        mu_a |     217.35        46.01        0.0217
       var_a |     293.34        34.09        0.0293
        mu_b |    1203.20         8.31        0.1203
       var_b |     656.92        15.22        0.0657
----------------------------------------------------

We observe that the parameters with ESS lower than 200 are among the asymptote parameters \(d_i\). This may be caused, for example, by overparameterization of the likelihood model and subsequent nonidentifiability, which is not resolved by the specified priors.
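The quantities in the table above are linked by simple identities: \({\rm ESS} = M/(1 + 2\sum_k \rho_k)\), correlation time \(= M/{\rm ESS}\), and efficiency \(= {\rm ESS}/M\), where \(M\) is the MCMC sample size and \(\rho_k\) the lag-\(k\) autocorrelation. A bare-bones Python version of this estimator (bayesstats ess uses a more refined truncation rule; the AR(1) chain below is synthetic):

```python
import random

def ess(draws, max_lag=50):
    """Effective sample size via a truncated autocorrelation sum."""
    m = len(draws)
    mean = sum(draws) / m
    var = sum((x - mean) ** 2 for x in draws) / m
    rho_sum = 0.0
    for k in range(1, max_lag + 1):
        cov = sum((draws[i] - mean) * (draws[i + k] - mean)
                  for i in range(m - k)) / m
        rho = cov / var
        if rho <= 0:        # stop once the autocorrelation dies out
            break
        rho_sum += rho
    return m / (1 + 2 * rho_sum)

# A strongly autocorrelated AR(1) chain: ESS is far below the chain length
random.seed(14)
chain, x = [], 0.0
for _ in range(2000):
    x = 0.9 * x + random.gauss(0, 1)
    chain.append(x)
ess_hat = ess(chain)
eff = ess_hat / len(chain)        # the "Efficiency" column
corr_time = len(chain) / ess_hat  # the "Corr. time" column
```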

We can also fit a model with a common upper asymptote parameter, \(d\), and compare it with the model with the item-specific upper asymptotes.


. set seed 14

. bayesmh y, likelihood(dbernoulli(({gues:}+({d}-{gues:})*                  ///
>                                  invlogit({discrim:}*({subj:}-{diff:})))* ///
>                                  cond({gues:}<{d},1,.)))                  ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(subj:i.id)                      ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({d},              uniform(0.8, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b d}, split)                           ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d} 0.9)                ///
>         exclude({subj:i.id}) burnin(5000) saving(sim4pl, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
       {subj:i.id} ~ normal(0,1)                                           (4)
               {d} ~ uniform(0.8,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+({d}-xb_gues)*invlogit(xb_discrim*(xb_subj-xb_diff)))* cond
          (xb_gues<{d},1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_subj.

Bayesian Bernoulli model                         MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3877
                                                 Efficiency:  min =      .0107
                                                              avg =     .03047
Log marginal likelihood =          .                          max =      .1626

file sim4pl.dta saved

. estimates store est4pl

. bayesstats summary {d mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
           d |  .9664578   .0144952   .001293   .9668207   .9371181   .9924572
        mu_a |  .2206696   .1387873    .01113   .2208302  -.0483587   .4952625
       var_a |  .1245785   .0391551   .001806   .1188779   .0658243   .2187058
        mu_b |  .0371722   .2020157    .00501   .0331742  -.3481366   .4336587
       var_b |  .5603447   .1761812   .006817   .5279243   .3157048   .9805077
------------------------------------------------------------------------------

We now compare the two 4PL models by using the bayesstats ic command:


. bayesstats ic est4pls est4pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
     est4pls |  8050.805
      est4pl |  8037.075
------------------------

The DIC of the more complex 4PL model (8,051) is noticeably higher than the DIC of the simpler model (8,037). This and the potential nonidentifiability of the more complex est4pls model, indicated by high autocorrelation in the simulated MCMC sample, compel us to proceed with the model with a common upper asymptote, est4pl.

The posterior distribution of \(d\) has an estimated 95% equal-tailed credible interval of (0.93, 0.99) and is concentrated around 0.97. The \({\rm Uniform}(0.8,1)\) prior on \(d\) thus does not appear to be too restrictive. The estimated DIC of the est4pl model (8,037) is lower than the DIC of the est3pls 3PL model from the previous section (8,049), implying that the introduction of the upper asymptote parameter \(d\) does improve the model fit.

Back to table of contents

5PL model

The five-parameter logistic (5PL) model extends the 4PL model by adding item-specific asymmetry parameters \(e_i\):
\[
P(Y_{ij}=1) = c_i + (d_i-c_i)\big[{\rm InvLogit}\{a_i(\theta_j-b_i)\}\big]^{e_i}, \quad c_i < d_i < 1,\; 0 < e_i < 1
\]
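In code form, the 5PL probability differs from the 4PL one only by the exponent; with \(e_i=1\) it reduces to the 4PL model, and with \(e_i<1\) the curve becomes asymmetric about the difficulty point. A Python sketch with arbitrary illustrative values:

```python
import math

def p_5pl(theta, a, b, c, d, e):
    """5PL response probability; e is the asymmetry parameter, 0 < e < 1."""
    inv_logit = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return c + (d - c) * inv_logit ** e

# At theta = b, invlogit(0) = 0.5; e = 1 recovers the symmetric 4PL value,
# while e < 1 raises the probability there, skewing the curve:
p_sym  = p_5pl(0.0, a=1.0, b=0.0, c=0.1, d=0.9, e=1.0)
p_asym = p_5pl(0.0, a=1.0, b=0.0, c=0.1, d=0.9, e=0.8)
```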

In the previous section, we found the 4PL model with common upper asymptote \(d\), est4pl, to be the best one so far. We thus consider here a 5PL model with common upper asymptote \(d\).

Generally, we expect the \(e_i\) parameters to be close to 1. Similarly to the upper asymptote parameter \(d\), the \(e_i\) parameters are assumed to be in the \((0.8,1)\) range and are assigned a \({\rm Uniform}(0.8,1)\) prior. We initialize the \(e_i\)'s to 0.9. We again use the notable option to suppress the long table output, and we display a subset of results by using bayesstats summary. (We could have used bayesmh's noshow() option instead to achieve the same result.)


. set seed 14

. bayesmh y, likelihood(dbernoulli(({gues:}+({d}-{gues:})*                  ///
>                           (invlogit({discrim:}*({subj:}-{diff:})))^{e:})* ///
>                           cond({gues:}<{d},1,.)))                         ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(e:i.item)  redefine(subj:i.id)  ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({d},              uniform(0.8, 1))                          ///
>         prior({e:i.item},       uniform(0.8, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b d}, split)                           ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d} {e:i.item} 0.9)     ///
>         exclude({subj:i.id}) burnin(5000) saving(sim5pls, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
        {e:i.item} ~ uniform(0.8,1)                                        (4)
       {subj:i.id} ~ normal(0,1)                                           (5)
               {d} ~ uniform(0.8,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+({d}-xb_gues)*(invlogit(xb_discrim*(xb_subj-xb_diff)))^xb_e
          )* cond(xb_gues<{d},1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_e.
(5) Parameters are elements of the linear form xb_subj.

Bayesian Bernoulli model                         MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3708
                                                 Efficiency:  min =    .007341
                                                              avg =     .02526
Log marginal likelihood =          .                          max =      .1517

file sim5pls.dta saved

. estimates store est5pls

. bayesstats summary {e:i.item} {d mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
e            |
        item |
          1  |   .897859   .0578428   .006083   .8939272   .8050315   .9957951
          2  |  .9042669   .0585023   .005822     .90525   .8053789   .9956565
          3  |    .88993   .0562398   .005013    .887011    .803389   .9930454
          4  |  .9010241   .0574186   .006492   .9042044   .8030981   .9925598
          5  |  .9126369   .0545625    .00521   .9178927   .8098596   .9964487
          6  |  .9037269   .0583833   .006814   .9086704   .8054932   .9961268
          7  |  .9136308   .0558911   .005373   .9203899   .8112029    .996217
          8  |   .889775   .0568656   .005119   .8849938    .803912   .9938777
          9  |  .8808435    .056257   .004743   .8727194   .8030522   .9904972
-------------+----------------------------------------------------------------
           d |  .9671374   .0144004   .001165   .9670598   .9382404   .9933374
        mu_a |  .2770211   .1353777    .00832   .2782552   .0141125   .5418087
       var_a |   .122635   .0404159   .002148   .1160322   .0666951   .2208711
        mu_b |  .1211885   .1929743   .004955   .1199136  -.2515431    .503733
       var_b |  .5407642   .1747674   .006353   .5088269   .3016315   .9590086
------------------------------------------------------------------------------

We also want to compare the above model with a simpler one using a common asymmetry parameter \(e\).


. set seed 14

. bayesmh y, likelihood(dbernoulli(({gues:}+({d}-{gues:})*                  ///
>                            (invlogit({discrim:}*({subj:}-{diff:})))^{e})* ///
>                            cond({gues:}<{d},1,.)))                        ///
>         redefine(discrim:i.item) redefine(diff:i.item)                    ///
>         redefine(gues:i.item)    redefine(subj:i.id)                      ///
>         prior({subj:i.id},      normal(0, 1))                             ///
>         prior({discrim:i.item}, lognormal({mu_a}, {var_a}))               ///
>         prior({diff:i.item},    normal({mu_b}, {var_b}))                  ///
>         prior({gues:i.item},    igamma(10, 1))                            ///
>         prior({d} {e},          uniform(0.8, 1))                          ///
>         prior({mu_a} {mu_b},    normal(0, 0.1))                           ///
>         prior({var_a} {var_b},  igamma(10, 1))                            ///
>         block({mu_a mu_b var_a var_b d e}, split)                         ///
>         init({discrim:i.item} 1 {gues:i.item} 0.1 {d e} 0.9)              ///
>         exclude({subj:i.id}) burnin(5000) saving(sim5pl, replace) notable
  
Burn-in ...
Simulation ...

Model summary
------------------------------------------------------------------------------
Likelihood: 
  y ~ binomial(<expr1>,1)

Priors: 
  {discrim:i.item} ~ lognormal({mu_a},{var_a})                             (1)
     {diff:i.item} ~ normal({mu_b},{var_b})                                (2)
     {gues:i.item} ~ igamma(10,1)                                          (3)
       {subj:i.id} ~ normal(0,1)                                           (4)
             {d e} ~ uniform(0.8,1)

Hyperpriors: 
    {mu_a mu_b} ~ normal(0,0.1)
  {var_a var_b} ~ igamma(10,1)

Expression: 
  expr1 : (xb_gues+({d}-xb_gues)*(invlogit(xb_discrim*(xb_subj-xb_diff)))^{e})
          * cond(xb_gues<{d},1,.)
------------------------------------------------------------------------------
(1) Parameters are elements of the linear form xb_discrim.
(2) Parameters are elements of the linear form xb_diff.
(3) Parameters are elements of the linear form xb_gues.
(4) Parameters are elements of the linear form xb_subj.

Bayesian Bernoulli model                         MCMC iterations  =     15,000
Random-walk Metropolis-Hastings sampling         Burn-in          =      5,000
                                                 MCMC sample size =     10,000
                                                 Number of obs    =      7,200
                                                 Acceptance rate  =      .3805
                                                 Efficiency:  min =    .008179
                                                              avg =     .02768
Log marginal likelihood =          .                          max =     .08904

file sim5pl.dta saved

. estimates store est5pl

. bayesstats summary {e d mu_a var_a mu_b var_b}

Posterior summary statistics                      MCMC sample size =    10,000
 
------------------------------------------------------------------------------
             |                                                Equal-tailed
             |      Mean   Std. Dev.     MCSE     Median  [95% Cred. Interval]
-------------+----------------------------------------------------------------
           e |  .9118363   .0558178   .004194   .9175841   .8063153   .9960286
           d |  .9655166   .0147373   .001495   .9659029   .9354708   .9924492
        mu_a |  .2674271   .1368926   .008485    .270597   .0102798   .5443345
       var_a |  .1250759   .0428095   .002635   .1173619   .0654135   .2340525
        mu_b |  .1015121   .2048178   .006864    .103268  -.3052377   .4934158
       var_b |  .5677309   .1824591   .006981   .5331636   .3079868   1.016762
------------------------------------------------------------------------------

We use bayesstats ic to compare the DIC values of the two 5PL models:


. bayesstats ic est5pls est5pl, diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
     est5pls |  8030.894
      est5pl |  8034.517
------------------------

The estimated DIC of the more complex est5pls model (8,031) is lower than the DIC of the simpler model (8,035), suggesting a better fit.

Back to table of contents

Conclusion

Finally, we compare all eight fitted models.


. bayesstats ic est1pl est2pl est3pl est3pls est4pl est4pls est5pl est5pls, ///
>         diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
      est1pl |  8122.428
      est2pl |  8055.005
      est3pl |  8049.426
     est3pls |  8049.425
      est4pl |  8037.075
     est4pls |  8050.805
      est5pl |  8034.517
     est5pls |  8030.894
------------------------

The est5pls model has the lowest overall DIC. To confirm this result, we run another set of simulations with a larger MCMC sample size of 50,000. (We simply added the mcmcsize(50000) option to the bayesmh specification of the above eight models.) The following DIC values, based on the larger MCMC sample size, are more reliably estimated.


. bayesstats ic est1pl est2pl est3pl est3pls est4pl est4pls est5pl est5pls, ///
>         diconly

Deviance information criterion

------------------------
             |       DIC
-------------+----------
      est1pl |  8124.015
      est2pl |  8052.068
      est3pl |  8047.067
     est3pls |  8047.738
      est4pl |  8032.417
     est4pls |  8049.712
      est5pl |  8031.375
     est5pls |  8031.905
------------------------

Again, the 5PL models have the lowest DIC values and seem to provide the best fit. However, the DIC differences between models est4pl, est5pl, and est5pls are minimal and may very well be within the estimation error. Regardless, these three models appear to be better than the simpler 1PL, 2PL, and 3PL models.
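The estimation-error caveat can be quantified: the Monte Carlo standard error of a posterior mean is the sample standard deviation divided by \(\sqrt{\rm ESS}\), so a five-fold increase in MCMC sample size (at comparable efficiency) shrinks it only by about \(\sqrt{5}\approx 2.2\). A toy Python illustration of this scaling (not bayesmh's internal computation):

```python
import math, random

def mcse(draws, ess):
    """Monte Carlo standard error of a posterior mean: sd / sqrt(ESS)."""
    m = len(draws)
    mean = sum(draws) / m
    sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / (m - 1))
    return sd / math.sqrt(ess)

random.seed(14)
draws = [random.gauss(0, 1) for _ in range(10000)]
# Independent draws, so ESS equals the MCMC sample size:
err_10k = mcse(draws, ess=10000)
err_50k = mcse(draws, ess=50000)   # what a 5x longer chain would give
```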

More model checking may be needed to assess the models' fit, and we should not rely solely on the DIC values to make our final model selection. A practitioner may prefer the simpler est4pl 4PL model to the 5PL models even though it has a slightly higher DIC. In fact, given that the posterior mean estimate of the upper asymptote parameter \(d\) is 0.96 with a 95% equal-tailed credible interval of (0.94, 0.99), some practitioners may prefer the even simpler est3pl 3PL model.

References

De Boeck, P., and M. Wilson, ed. 2004. Explanatory Item Response Models: A Generalized Linear and Nonlinear Approach. New York: Springer.

Kim, J.-S., and D. M. Bolt. 2007. Estimating item response theory models using Markov chain Monte Carlo methods. Educational Measurement: Issues and Practice 26: 38-51.



