Advances in Computer Sciences

ISSN 2517-5718

On Modeling the Double and Multiplicative Binomial Models as Log-Linear Models

Bayo H Lawal*

Department of Statistics & Mathematical Sciences, Kwara State University, Malete, Kwara State, Nigeria

Corresponding author

Bayo H Lawal
Department of Statistics & Mathematical Sciences
Kwara State University
Malete, Kwara State

  • Received Date: 20 October 2017
  • Accepted Date: 20 November 2017
  • Published Date: 12 January 2018

DOI:   10.31021/acs.20181103

Article Type:   Research Article

Manuscript ID:   ACS-1-103

Publisher:   Boffin Access Limited.

Volume:   1.1

Journal Type:   Open Access

Copyright:   © 2018 Lawal BH.
Creative Commons Attribution 4.0


Lawal BH (2018) On Modeling the Double and Multiplicative Binomial Models as Log-Linear Models. Adv Comput Sci. 2018 Jan;1(1):103.


Abstract

In this paper we have fitted the double binomial and multiplicative binomial distributions as log-linear models using sufficient statistics. This approach is not new, as several authors have employed it, most notably in the analysis of the human sex ratio in [1]. However, obtaining the estimated parameters of the distributions may be problematic, especially for the double binomial, where the parameter estimate of π may not be readily available from the Log-Linear (LL) parameter estimates. Another issue associated with the LL approach is its implementation in the generalized linear model with covariates. The LL approach uses far more parameters than the procedure that employs conditional log-likelihood functions, in which the marginal likelihood function is maximized over the parameter space. This is the procedure employed in SAS PROC NLMIXED. The two procedures are essentially equivalent for frequency data. For models with covariates, the LL approach uses far more parameters, and the marginal likelihood approach is therefore employed here on three data sets having covariates.


Keywords: Double Binomial; Multiplicative Binomial; Log-Linear; Marginal Likelihood Functions


Introduction

In the formulations of the multiplicative binomial distribution in Altham [2] and its corresponding double binomial distribution in Efron [3], both distributions were characterized by intractable normalizing constants c(n, ψ) and c(n, π) respectively. Consequently, these models were implemented by fitting a generalized linear model with a Poisson distribution and log link to the frequency data. This approach had earlier been similarly employed in [1,4]. The approach, which employs joint sufficient statistics in both distributions, was first proposed in Lindsey & Mersch [5].

Both distributions are fitted using a Poisson regression model having the sufficient statistics from both distributions as explanatory variables, with the frequencies as the dependent variable. For the double binomial model (DBM), the sufficient statistics are y log(y) and (n − y) log(n − y). The corresponding joint sufficient statistics for the multiplicative binomial model (MBM) are y and y(n − y), with the offset being

$$Z = \log \left( {\matrix{ n \cr y \cr } } \right)$$ for both models. For instance, for the DBM the model would be: $$\log \left( {{{{n_i}} \over Z}} \right) = y + \theta .........................(1)$$ where $$\theta = {\theta _1} + {\theta _2}$$, with $${\theta _1} = \left\{ {\matrix{ 0 & {{\rm{if }}y = 0} \cr {y\log (y)} & {{\rm{otherwise}}} \cr } } \right.$$ and $${\theta _2} = \left\{ {\matrix{ 0 & {{\rm{if }}y = n} \cr {(n - y)\log (n - y)} & {{\rm{otherwise}}} \cr } } \right.$$ Similarly, for the multiplicative binomial, the model is estimated by the log-linear (Poisson) model: $$\log \left( {{{{n_i}} \over Z}} \right) = y + \delta .........................(2)$$ where $$\delta = {\delta _1} + {\delta _2}$$, and $${\delta _1} = \left\{ {\matrix{ 0 & {{\rm{if }}y = 0} \cr y & {{\rm{otherwise}}} \cr } } \right.$$ and $${\delta _2} = \left\{ {\matrix{ 0 & {{\rm{if }}y = n} \cr {y(n - y)} & {{\rm{otherwise}}} \cr } } \right.$$

Both models have, however, since been fully formulated, with the previously intractable normalizing constants written out explicitly; these forms are described later in this paper. In this study, we compare fitting these two probability models to two example frequency data sets, two data sets arising from teratology studies, and a randomized complete block design with binary outcomes, both by the method of sufficient statistics described above and by numerically maximizing the marginal likelihood function that arises from treating the problem as a generalized linear mixed model. SAS PROC NLMIXED performs the maximum likelihood estimation numerically, using adaptive Gaussian quadrature and the Newton-Raphson optimization algorithm. We designate the sufficient-statistics Poisson regression approach as LL and the marginal likelihood maximization via PROC NLMIXED as MgL. The LL procedure uses a Poisson regression with an offset and is implemented in SAS PROC GENMOD.
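For frequency data, the DBM leg of this procedure can be sketched in a few lines. The sketch below is in Python rather than SAS/GENMOD, with made-up frequencies (not the data analyzed in this paper); it builds the offset Z, the joint sufficient statistic, and fits the Poisson regression by Newton-Raphson:

```python
import numpy as np
from math import lgamma

# Hypothetical frequencies of y successes out of n = 12 trials
# (illustrative only -- NOT the Geissler data analyzed in this paper)
n = 12
y = np.arange(n + 1)
freq = np.array([3., 20., 70., 160., 250., 300., 280.,
                 210., 120., 50., 18., 5., 1.])

# Offset Z = log C(n, y)
Z = np.array([lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1) for k in y])

# DBM joint sufficient statistic: y log y + (n - y) log(n - y), with 0 log 0 = 0
def xlogx(v):
    v = np.asarray(v, dtype=float)
    return np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0)), 0.0)

theta = xlogx(y) + xlogx(n - y)

# Poisson regression (log link) of freq on [1, y, theta] with offset Z,
# fitted by Newton-Raphson / IRLS
X = np.column_stack([np.ones(n + 1), y, theta])
beta = np.linalg.lstsq(X, np.log(freq) - Z, rcond=None)[0]  # crude start
for _ in range(50):
    mu = np.exp(X @ beta + Z)
    step = np.linalg.solve((X * mu[:, None]).T @ X, X.T @ (freq - mu))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

mu = np.exp(X @ beta + Z)
phi_hat = 1.0 - beta[2]   # DBM dispersion: phi = 1 - coefficient of theta
print("phi_hat =", round(phi_hat, 4))
```

At convergence the Poisson score equation for the intercept forces the fitted frequencies to reproduce the observed total, which is the property the LL approach exploits.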

Models Under Consideration

We describe in the following sections the two probability distribution models employed in this paper.

The Multiplicative Binomial Model-MBM

Lovinson [6] proposed an alternative form of the two-parameter exponential family generalization of the binomial distribution first introduced by Altham [2], which itself was based on the original Cox [7] representation:

$$f(y) = {{\left( {\matrix{ n \cr y \cr } } \right)\psi _{}^y(1 - \psi )_{}^{n - y}\omega _{}^{y(n - y)}} \over {\sum\limits_{j = 0}^n {\left( {\matrix{ n \cr j \cr } } \right)\psi _{}^j(1 - \psi )_{}^{n - j}\omega _{}^{j(n - j)}} }},y = 0,1,........,n......................................(3)$$

where 0 < ψ < 1 and ω > 0. When ω = 1 the distribution reduces to the binomial with π = ψ. If ω = 1, n → ∞, and ψ → 0 such that nψ → μ, then the MBD reduces to the Poisson(μ).

The normalizing constant is $$c(n,\psi ) = \sum\limits_{j = 0}^n {\left( {\matrix{ n \cr j \cr } } \right)\psi _{}^j(1 - \psi )_{}^{n - j}\omega _{}^{j(n - j)}} $$

the denominator expression in (3). Elamir [8] presented an elegant characterization of the multiplicative binomial distribution, including its first four central moments. His treatment includes generation of random data from the distribution, as well as likelihood profiles and several examples, some of which are similarly employed in this presentation. Following [8], the probability π of success for the Bernoulli trial, that is, P(Y = 1), can be computed from the expression in (4):

$${p_i} = {\psi ^i}{{{K_{n - i}}(\psi ,\omega )} \over {{K_n}(\psi ,\omega )}},\;{\rm{for}}\;i = 1...........................(4)$$
where $${K_{n - a}}(\psi ,\omega ) = \sum\limits_{y = 0}^{n - a} {\left( {\matrix{ {n - a} \cr y \cr } } \right)\psi _{}^y(1 - \psi )_{}^{n - a - y}\omega _{}^{(y + a)(n - a - y)}} ;\;a = 1,2, \ldots .............(5)$$

With p defined as in (4), ψ can therefore be interpreted as the probability of success weighted by the intra-unit association measure ω, which measures the dependence among the binary responses of the n units. Thus if ω = 1, then p = ψ and we have independence among the units. However, if ω ≠ 1, then p ≠ ψ and the units are not independent.

The mean and variance of the MBD are given respectively as:

$$E(Y) = n{p_1}\;\;\;\;\;\;(6a)$$
$$Var(Y) = n{p_1} + n(n - 1){p_2} - {n^2}p_1^2\;\;\;\;(6b)$$
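The identities (6a) and (6b) can be checked numerically from the pmf in (3) and the K functions in (5), using p_a = ψ^a K_{n−a}(ψ, ω)/K_n(ψ, ω). The parameter values below are arbitrary illustrations, not estimates from this paper:

```python
import numpy as np
from math import comb

def K(n, a, psi, omega):
    """K_{n-a}(psi, omega) of eq. (5); a = 0 gives the normalizing constant c(n, psi)."""
    return sum(comb(n - a, y) * psi**y * (1 - psi)**(n - a - y)
               * omega**((y + a) * (n - a - y)) for y in range(n - a + 1))

def mbm_pmf(n, psi, omega):
    """Multiplicative binomial pmf of eq. (3)."""
    c = K(n, 0, psi, omega)
    return np.array([comb(n, y) * psi**y * (1 - psi)**(n - y)
                     * omega**(y * (n - y)) / c for y in range(n + 1)])

n, psi, omega = 12, 0.4, 1.05          # arbitrary illustrative values
p = mbm_pmf(n, psi, omega)
y = np.arange(n + 1)
mean = (y * p).sum()
var = (y**2 * p).sum() - mean**2

# p_a = psi^a K_{n-a} / K_n, so that (6a) gives E(Y) = n p1 and
# (6b) gives Var(Y) = n p1 + n(n-1) p2 - n^2 p1^2
p1 = psi * K(n, 1, psi, omega) / K(n, 0, psi, omega)
p2 = psi**2 * K(n, 2, psi, omega) / K(n, 0, psi, omega)
print(mean - n * p1)                                      # ~0
print(var - (n * p1 + n * (n - 1) * p2 - (n * p1)**2))    # ~0
```

When ω = 1 the pmf reduces to the ordinary binomial and p₁ = ψ, consistent with the independence interpretation above.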

The Double Binomial (DBM) Model

In Feirer et al. [9], the double binomial distribution was presented, having the pdf form:
$$f(y;\pi ,\phi ) = {{\left( {\matrix{ n \cr y \cr } } \right)[y_{}^y(n - y)_{}^{n - y}]_{}^{1 - \phi }[\pi /(1 - \pi )]_{}^{y\phi }} \over {\sum\limits_{j = 0}^n {\left( {\matrix{ n \cr j \cr } } \right)[j_{}^j(n - j)_{}^{n - j}]_{}^{1 - \phi }[\pi /(1 - \pi )]_{}^{j\phi }} }},y = 0,1,........,n.................(7)$$
Again, the normalizing constant in this case is the denominator expression, given by $$c(n,\pi ) = \sum\limits_{j = 0}^n {\left( {\matrix{ n \cr j \cr } } \right)[j_{}^j(n - j)_{}^{n - j}]_{}^{1 - \phi }[\pi /(1 - \pi )]_{}^{j\phi }} $$
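A quick numerical sketch of (7), with illustrative parameter values rather than estimates from this paper, confirms that φ = 1 recovers the ordinary binomial, while φ < 1 inflates the variance (over-dispersion):

```python
import numpy as np
from math import comb

def dbm_pmf(n, pi, phi):
    """Double binomial pmf of eq. (7); relies on 0**0 == 1 at y = 0 and y = n."""
    def kernel(y):
        return (comb(n, y) * float(y**y * (n - y)**(n - y))**(1 - phi)
                * (pi / (1 - pi))**(y * phi))
    k = np.array([kernel(y) for y in range(n + 1)])
    return k / k.sum()              # divide by the normalizing constant c(n, pi)

# phi = 1 recovers the ordinary binomial
p = dbm_pmf(10, 0.3, 1.0)
binom = np.array([comb(10, y) * 0.3**y * 0.7**(10 - y) for y in range(11)])
print(np.allclose(p, binom))        # True

# phi < 1 inflates the variance relative to the binomial variance npq = 2.1
q = dbm_pmf(10, 0.3, 0.6)
yv = np.arange(11)
var = (yv**2 * q).sum() - ((yv * q).sum())**2
print(var, 10 * 0.3 * 0.7)
```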


We apply the models discussed above to two frequency data sets and to teratology data sets having four and two treatment groups respectively. We first present the analyses for the two frequency data sets in Tables 1 through 5. The estimation of the parameters under each model for the MgL approach uses SAS PROC NLMIXED, with the following log-likelihoods for the MBM (LL1) and DBM (LL2) respectively. The procedure was discussed earlier in the paper. $$LL1 = \log \left( {\matrix{ n \cr y \cr } } \right) + y\log (\psi ) + (n - y)\log (1 - \psi ) + y(n - y)\log \omega - \log \left[ {\sum\limits_{j = 0}^n {\left( {\matrix{ n \cr j \cr } } \right)\psi _{}^j(1 - \psi )_{}^{n - j}\omega _{}^{j(n - j)}} } \right]$$ $$LL2 = \log \left( {\matrix{ n \cr y \cr } } \right) + (1 - \phi )[y\log (y) + (n - y)\log (n - y)] + y\phi \log [\pi /(1 - \pi )] - \log \left[ {\sum\limits_{j = 0}^n {\left( {\matrix{ n \cr j \cr } } \right)[j_{}^j(n - j)_{}^{n - j}]_{}^{1 - \phi }[\pi /(1 - \pi )]_{}^{j\phi }} } \right]$$

Example Data Set I-Geissler Data


Table 1: Distribution of Males in 6115 families with 12 children

The distribution of males in 6115 families with 12 children in Saxony, previously analyzed in Sokal & Rohlf [10], is presented in Table 1. The data is originally from Geissler [11] and has similarly been analyzed in [12]. Here Y ~ binomial(12, π). The frequencies are presented as counts having a total sum of 6115. The observed mean for the data is y = 6.2306 and the corresponding variance is s2 = 3.4898. Under the binomial model, the estimated mean is 6.2304 and the estimated variance is 2.9956. Hence the estimated dispersion parameter is DP = 2.07, indicating over-dispersion in the data. The estimated probability of occurrence under the binomial model is π = 0.5192. The binomial does not fit the data (X2 = 110.5051 on 11 d.f., p-value < 0.0001) because the variance of the data is grossly underestimated by the model. The results of applying the double binomial and the multiplicative models to this data are presented in Table 2.


Table 2: Parameter estimates under the five Models

Further, the mixed model (MgL) approach is based on one more degree of freedom, as it estimates one parameter fewer than the LL approach. The MgL approach gives the parameter estimates of the distribution directly. We can obtain the equivalent parameter estimates from the log-linear (LL) approach for the multiplicative model as follows: $$\widehat \omega = \exp (\widehat \delta ) = \exp ( - 0.02615) = 0.9742,\;\widehat \psi = 1/[1 + \exp \{ - (\widehat \delta + \widehat y)\} ] = 0.5165.......................(8)$$

For the DBM, $$\widehat \phi $$ can equivalently be obtained as 1 − $$\widehat \theta $$ = 1 − 0.140205 = 0.8598, but the estimated probability $$\widehat \pi $$ seems intractable in this case, and no equivalent solution is available. We may note here that the estimate $$\widehat \psi $$ = 0.5165 under the multiplicative model is not an estimate of the success probability π. For this data, we must use the expressions in (4) and (5) to obtain this estimate. Here, $${K_{n - 1}}(\psi ,\omega )$$ = 0.42723 and $${K_n}(\psi ,\omega )$$ = 0.42499.

Consequently, $$\widehat \pi = \widehat \psi \left( {{{{K_{n - 1}}} \over {{K_n}}}} \right) = 0.5165\left( {{{0.42723} \over {0.42499}}} \right) = 0.5192.$$
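This chain of computations, from δ̂ in (8) to π̂ via (4) and (5), can be verified numerically; a short sketch assuming n = 12 and the estimates reported above:

```python
from math import comb, exp

def K(n, a, psi, omega):
    """K_{n-a}(psi, omega) of eq. (5); a = 0 gives K_n, the normalizing constant."""
    return sum(comb(n - a, y) * psi**y * (1 - psi)**(n - a - y)
               * omega**((y + a) * (n - a - y)) for y in range(n - a + 1))

n = 12
omega = exp(-0.02615)     # eq. (8): omega-hat = exp(delta-hat), about 0.9742
psi = 0.5165              # psi-hat reported for the Geissler data

pi_hat = psi * K(n, 1, psi, omega) / K(n, 0, psi, omega)   # eq. (4)
print(round(omega, 4), round(pi_hat, 4))
```

The recovered π̂ agrees with the binomial estimate 0.5192 for this data, as it should, since both are estimating the same marginal success probability.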

The mean and variance can therefore be computed respectively from (6a) and (6b). Alternatively, the means and variances can be empirically obtained from the fitted models using the elementary principles $$E\left( Y \right) = \sum\limits_{i = 0}^n {{y_i}{{\widehat p}_i}} $$ and $$Var(Y) = E({Y^2}) - {[E(Y)]^2}$$. These distributions are displayed in Table 3.


Table 3: Empirical Means and Variances for both the DBM and MBM

In Table 3, some of the columns are self-explanatory. The columns labeled V1 and V2 are cumulative values of $$\sum\limits_{}^{} {y_i^2{{\widehat p}_i}} - {\left[ {\sum\limits_{}^{} {{y_i}{{\widehat p}_i}} } \right]^2}$$ for the two models respectively. Thus, the mean is the value at y = 12. The variance for the DBM, for instance, is computed as 42.3116 − (6.2306)^2 = 3.4915.

Table 4 presents the expected values under both models for the two approaches (LL & MgL); both approaches give identical results, as expected. The table also displays the mean of the distributions under both approaches, as well as the empirical variances, designated here as var. We recall that for the observed data in Table 1, y = 6.2306 and s2 = 3.4898. We see from Table 4 that, while the two models estimate the mean of the data well, the estimated variance under the binomial model of 12(0.5192)(1 − 0.5192) = 2.9956 underestimates the observed variance of the data, and this explains the poor fit of the binomial model. On the other hand, for the two models, the variance of the observed data is reasonably well estimated, because of the extra (dispersion) parameter in each model, φ and ω for the DBM and MBM respectively.


Table 4: Expected values under the two models and Approaches with corresponding Pearson’s X2 Statistic value

Table 4 also displays the corresponding Pearson's X2 statistics and the corresponding degrees of freedom (d.f.). Clearly, for this data set, both the double binomial and the multiplicative models fit the data well, with the double binomial providing a slightly better fit. Although the expected values generated are the same for both fitting approaches, the marginal likelihood (MgL) approach gives a more parsimonious model because it is based on one more degree of freedom.

Data Example II

This example is taken from Nelder & Mead [13] and relates to the number of candidates obtaining an "alpha", i.e. a score of at least 15 out of a total of 20 points, on each of nine questions employed in assessing the final class of candidates in an examination. There were a total of 209 candidates for the examination, and Table 5 gives the distribution of these scores.


Table 5: Expected Values under the two models and approaches with corresponding Pearson’s X2 Statistic Values


The results of applying both the DBM and MBM to the data using both approaches (LL and MgL) are presented in Table 5. Again, both approaches lead to the same results in terms of expected values. However, the MgL fits have one more degree of freedom under both models than the LL approach. Again, to get equivalent parameter estimates from the LL model, we have, for the multiplicative model, $$\widehat \omega = \exp (\widehat \delta ) = \exp ( - 0.2168) = 0.8051,\;\widehat \psi = 1/[1 + \exp \{ - (\widehat \delta + \widehat y)\} ] = 0.3630$$ For the double binomial, an equivalent estimate for φ is $$\widehat \phi = 1 - \widehat \theta = 1 - 0.6072 = 0.3928.$$ As discussed earlier, the corresponding estimate for π is not readily available. For this data set, the multiplicative model is the most parsimonious and fits the data very well.

Regression Model Formulations

When there are covariates in our data, the sufficient-statistics approach, hereinafter referred to as log-linear (LL), does not lend itself to easy formulation and implementation. Lindsey and Altham [1] employed this approach to fit, amongst others, the two models considered in this study to the distribution of males in families in Saxony during 1876-1885 (the human sex ratio data). This approach employs far too many parameters, 13 to be precise, when the same group of models can be implemented with only four parameters with the same results. Further, implementations under this approach are not readily available. Thus, in this study we employ the alternative MgL procedure that utilizes PROC NLMIXED in SAS. One advantage of this is that it is based on more degrees of freedom than the log-linear model. We saw for the frequency data in Table 1, for instance, that the MgL approach is based on 1 d.f. more than the LL approach.

Example I: Teratology-Ossification on the Phalanges

Teratology is the study of abnormalities of physiological development. The offspring of animals that were exposed to a toxin during pregnancy are studied for malformation. The number of malformed offspring in a litter of size n is not typically binomially distributed, because the responses of the offspring from the same litter are not independent; hence their sum does not constitute a binomial r.v. Thus, data in teratological studies exhibit over-dispersion because of the correlation among responses from offspring in the same litter.

Morel & Neerchal [14] report data from a completely randomized design that studies the teratogenicity of phenytoin in 81 pregnant mice. The treatment structure of the experiment is an augmented factorial. In addition to an untreated control, mice received 60 mg/kg of phenytoin (PHT), 100 mg/kg of trichloropropene oxide (TCPO), or their combination. The design was augmented with a control group that was treated with water. As in [15], the two control groups are combined here into a single group. The presence or absence of ossification in the phalanges on both the right and left forepaws of each of the fetuses is considered a measure of the teratogenic effect. The data is presented below; for the control, for instance, there are 35 pairs of observations designated as (n, r). The numbers of mice in each group are respectively {35, 19, 16, 11}.

control 35 8 8 9 9 7 9 0 5 3 3 5 8 9 10 5 8 5 8 1 6 0 5
8 8 9 10 5 5 4 7 9 10 6 6 3 5 8 9 7 10 10 10
1 6 6 6 1 9 8 9 6 7 5 5 7 9 2 5 5 6 2 8 1 8
0 2 7 8 5 7
PHT 19 1 9 4 9 3 7 4 7 0 7 0 4 1 8 1 7
0 2 3 10 3 7 2 7 0 8 0 8 1 10 1 1
TCPO  16 0 5 7 10 4 4 8 11 6 10 6 9 3 4 2 8 0 6 0 9
3 6 2 9 7 9 1 10 8 8 6 9
PHT2   11 2 2 0 7 1 8 7 8 0 10 0 4 0 6 0 7 6 6 1 6 1 7

Suppose Yij denotes the number of deaths in litter i. Further, let pij be the probability of a fetus in litter i dying. Yij has an over-dispersed binomial distribution with mean ni pij and variance ni pij(1 − pij)φ, with φ characterizing the correlation between any two fetuses within the same litter.
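The variance inflation ni pij(1 − pij)φ can be illustrated by a small simulation in which the per-litter risk itself varies (a beta-binomial mechanism; the litter size, mean risk, and intra-litter correlation below are hypothetical, not taken from the data in this paper):

```python
import numpy as np

rng = np.random.default_rng(2024)
n, p, rho = 10, 0.3, 0.2          # litter size, mean risk, intra-litter correlation

# Beta mixing with mean p and a + b = (1 - rho)/rho gives
# Var(Y) = n p (1 - p) [1 + (n - 1) rho], an inflated binomial variance
a, b = p * (1 - rho) / rho, (1 - p) * (1 - rho) / rho
p_i = rng.beta(a, b, size=200_000)        # litter-specific death probabilities
y = rng.binomial(n, p_i)                  # correlated fetuses within a litter

binom_var = n * p * (1 - p)               # 2.1: what independence would give
print(y.var(), binom_var * (1 + (n - 1) * rho))
```

The empirical variance is close to the inflated value and far above the binomial variance, which is exactly the over-dispersion the DBM and MBM dispersion parameters are meant to absorb.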

The probability of fetal death is modeled with the logit link viz: $$\log \left( {{{{\pi _{ij}}} \over {1 - {\pi _{ij}}}}} \right) = {\beta _0} + {\beta _2}{z_{2i}} + {\beta _3}{z_{3i}} + {\beta _4}{z_{4i}}.............(9)$$

We have assumed here that β0 is common across litters, and that $${z_2} = \left\{ {\matrix{ 1 & {{\rm{if }}PHT} \cr 0 & {{\rm{otherwise}}} \cr } } \right.,\;{z_3} = \left\{ {\matrix{ 1 & {{\rm{if }}TCPO} \cr 0 & {{\rm{otherwise}}} \cr } } \right.,\;{z_4} = \left\{ {\matrix{ 1 & {{\rm{if }}PH{T_2}} \cr 0 & {{\rm{otherwise}}} \cr } } \right.$$

Thus, the ‘control’ treatment is the reference category in this set up. Our analysis begins by fitting three different models to the data. These models are described briefly below:

  • The model that assumes p0 = p1 = p2 = p3, with a common dispersion parameter φ or ω for the double binomial and the multiplicative models respectively, where p0 and p1, p2, p3 refer respectively to the corresponding probabilities in the control and other treatment groups. Here $${p_{ij}} = {1 \over {\left[ {1 + \exp ( - {\beta _0})} \right]}},\;\phi = \exp ({a_0})$$ and ω = exp(c0) with a0 ≠ c0. These ensure that the dispersion parameters are positive.
  • The model here has pi = pj, i ≠ j, and the dispersion parameters are functions of the covariates. That is, φ = exp(a0 + a2z2 + a3z3 + a4z4) and ω = exp(c0 + c2z2 + c3z3 + c4z4).
  • The model here has pi ≠ pj, with the ps modeled as in (9), and the dispersion parameters modeled as functions of the covariates as in the preceding case.
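The case III objective that PROC NLMIXED maximizes can be sketched as follows (in Python rather than SAS; the litter records and parameter values below are made up purely for illustration). The function returns the negative marginal log-likelihood, the quantity an optimizer would minimize:

```python
import numpy as np
from math import comb, log, exp

def dbm_logpmf(y, n, pi, phi):
    """log f(y; pi, phi) for the double binomial of eq. (7), with 0 log 0 = 0."""
    def h(j):   # log of the un-normalized kernel
        t = (j * log(j) if j > 0 else 0.0) + ((n - j) * log(n - j) if j < n else 0.0)
        return log(comb(n, j)) + (1 - phi) * t + j * phi * log(pi / (1 - pi))
    c = np.logaddexp.reduce([h(j) for j in range(n + 1)])  # log normalizing constant
    return h(y) - c

def negloglik(params, data):
    """Case III objective: logit(pi) = b0 + b2 z2 + b3 z3 + b4 z4, phi = exp(a0)."""
    b0, b2, b3, b4, a0 = params
    phi = exp(a0)
    nll = 0.0
    for y, n, z2, z3, z4 in data:
        eta = b0 + b2 * z2 + b3 * z3 + b4 * z4
        pi = 1.0 / (1.0 + exp(-eta))
        nll -= dbm_logpmf(y, n, pi, phi)
    return nll

# Tiny made-up litters: (y, n, z2, z3, z4) -- purely illustrative
data = [(2, 8, 0, 0, 0), (5, 9, 1, 0, 0), (7, 10, 0, 1, 0), (6, 8, 0, 0, 1)]
print(negloglik([-1.0, 0.5, 0.8, 0.9, 0.0], data))
```

NLMIXED applies exactly this kind of per-observation log-likelihood through its GENERAL specification, with Newton-Raphson supplying the minimization.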
Results


Table 6: Parameter estimates for the Models in all the Cases

From the results in Table 6, the two cases (II & III) with variable dispersion parameters fit better than the model in case I, where the dispersion is uniform across the four groups. Of the models in cases II and III, those in case III fit much better. Case II models assume that the four groups have a common estimated probability π, estimated as 0.2125 and 0.2158 under the DBM and MBM respectively. However, the models in case III, which assume heterogeneous success probabilities across the four groups and variable dispersion parameters (that are functions of the covariates), fit better than those in case II. The DBM here is based on X2 = 115.1333 on 72 d.f. The estimated πs under the MBM are functions of n; hence these values differ for different n in the final model (case III). We may note here that the ψs should not be mistaken for the success probabilities.

    Data Example II-Trout Egg Data

The data in Table 7, from Manly [16], relate to the number of surviving eggs from boxes of eggs that were buried at five different locations in a stream, with a box from each location sampled at four different times. The data is presented as y/n, where y is the number surviving and n is the number of eggs in the box.


    Table 7: Number of Surviving eggs against number of eggs in a box

The model of interest here is: $${\rm{logit}}({p_{ij}}) = {\beta _0} + \sum\limits_{k = 1}^4 {{\beta _k}{z_k}} + \sum\limits_{l = 1}^3 {{\gamma _l}{x_l}} ..................(10)$$ where the zk are four dummy variables for the location effects and the xl are three dummy variables representing the time effects. The structure here is that of a randomized block design, having locations as blocks and survival times as treatments. Thus, the Pearson's X2 would be partitioned into components for location (L) and survival time (T):


The degrees of freedom of 12 refer only to the binomial model. For all other distributions, the d.f. must account for the additional dispersion parameter estimates. Under the binomial model, X2 = 63.9639 on 12 d.f., giving an estimated dispersion parameter of 5.3303 > 1, indicating that the data is highly over-dispersed.
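Since the trout-egg counts are listed in the SAS program later in the paper, the binomial fit of model (10) and its Pearson X2 can be reproduced directly; a sketch using a hand-rolled IRLS logistic regression (assuming only numpy):

```python
import numpy as np

# Trout egg data (y survivors out of n); rows = locations 1-5, cols = times 1-4
y = np.array([[ 89,  94,  77, 141],
              [106,  91,  87, 104],
              [119, 100,  88,  91],
              [104,  80,  67, 111],
              [ 49,  11,  18,   0]], dtype=float)
n = np.array([[ 94,  98,  86, 155],
              [108, 106,  96, 122],
              [123, 130, 119, 125],
              [104,  97,  99, 132],
              [ 93, 113,  88, 138]], dtype=float)

# Design for model (10): intercept + 4 location dummies + 3 time dummies
rows = []
for L in range(5):
    for T in range(4):
        rows.append([1.0] + [float(L == k) for k in range(1, 5)]
                          + [float(T == k) for k in range(1, 4)])
X = np.array(rows)
yv, nv = y.ravel(), n.ravel()

beta = np.zeros(8)
for _ in range(100):                       # IRLS for the binomial logit model
    pr = 1.0 / (1.0 + np.exp(-(X @ beta)))
    w = nv * pr * (1 - pr)
    step = np.linalg.solve((X * w[:, None]).T @ X, X.T @ (yv - nv * pr))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

pr = 1.0 / (1.0 + np.exp(-(X @ beta)))
X2 = np.sum((yv - nv * pr) ** 2 / (nv * pr * (1 - pr)))
print(round(X2, 4), "on", 20 - 8, "d.f.")  # far above 12: heavy over-dispersion
```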

    Because of the overdispersion in the data, we now apply our models, DBM and the MBM to the data, giving the results in Table 8.


Table 8: Results of fitting the DBM and MBM to the Trout Egg Data

Models in (A) fit both the double binomial and the multiplicative binomial with a constant dispersion parameter. For this group of models, the multiplicative binomial performs much better, with a constant dispersion parameter of 0.9884, very close to 1, indicating partial independence in the data, ignoring the effects of locations. Models in (B) have variable dispersion parameters that are functions of the covariates (time); that is, φ = exp(a0 + a1x1 + a2x2 + a3x3) and ω = exp(c0 + c1x1 + c2x2 + c3x3). Under this formulation, the double binomial computation does not converge, but that of the multiplicative binomial converges. This model gives a Pearson X2 of 5.0120 on d.f. The results of this final model are presented in Table 9. Note that for the multiplicative model, the estimated probabilities of success π, which are not the same as the ψ in the model formulation in (3), are computed using the expressions in (4) and (5); that is, $$\widehat \psi \ne \widehat \pi $$. The column labeled ∑X2 gives the cumulative contributions of the observations towards X2; the value 5.0120 is the sum of all 20 contributions. Under the final multiplicative model, the estimated average probabilities of surviving the first 4, 7, 8 and 11 weeks are respectively {0.8854, 0.6999, 0.6831, 0.6656}.


Table 9: Results of the final multiplicative model for the Trout Egg Data

    Example III-Teratology

The data below is from unpublished toxicological studies on pregnant mice, Kupper & Haseman [17]. The study is concerned with the effect of compounds on fetal death or the occurrence of some abnormality in physiological development. Ten pregnant female mice in each of two groups (one group being the control and the other the treated group) were employed in the study. The data is presented below as y/n, where y is the number of dead offspring in a litter of size n.

    control 10 0/5, 2/6, 0/7, 0/7, 0/8,
    0/8, 0/8, 1/9, 2/9, 1/10
    TRT 10 0/5, 2/5, 1/7, 0/8, 2/8
    3/8, 0/9, 4/9, 1/10, 6/10.
If we let πij denote the probability of death for fetus j in litter i, then we would model this probability for both models with the logit link, viz:

$${\rm{logit}}({\pi _{ij}}) = {\beta _0} + {\beta _1}trt..............(11)$$ where trt = 1 for the treatment group and 0 otherwise. Again, we fit three competing models, as in [17], viz:

1. The model that assumes π0 = π1, with a common dispersion parameter φ or ω for the double binomial and the multiplicative models respectively, where π0 and π1 refer respectively to the corresponding probabilities in the control and treatment groups. Here, $${\rm{logit}}({\pi _{ij}}) = {\beta _0},\;\phi = {a_0}\;{\rm{and}}\;\omega = {c_0}\;{\rm{with}}\;{a_0} \ne {c_0}$$
2. The model here has $${\pi _0} = {\pi _1}$$ and the dispersion parameters are functions of the covariate; that is, $$\phi = {a_0} + {a_1}trt$$ and $$\omega = {c_0} + {c_1}trt$$
3. The model here has $${\pi _0} \ne {\pi _1}$$, with the πs modeled as in (11) and the dispersion parameters modeled as functions of the covariate as in the preceding case.

The results of these models are presented in Table 10.


Table 10: Parameter estimates under the three cases for the two probability models

    SAS Program for the Example

    options nodate nonumber ls=85 ps=66;
    data ex1;
    do L=1 to 5;
    do T=1 to 4;
    input y n @@;
end; end;
datalines;
89 94 94 98 77 86 141 155
106 108 91 106 87 96 104 122
119 123 100 130 88 119 91 125
104 104 80 97 67 99 111 132
49 93 11 113 18 88 0 138
;
proc print;
    /*generate indicator variables for Location*/;
    data w1;
    set ex1;
    array x(5) z1-z5;
    do j=1 to 5;
    if j=L then x(j)=1;
else x(j)=0;
end;
drop j;
    /*generate indicator variables for Time*/;
    data w2;
    set ex1;
    array d(4) x1-x4;
    do k=1 to 4;
    if k=T then d(k)=1;
else d(k)=0;
end;
drop k;
    data new;
    merge w1 w2;
    proc sort data=new;
    by T;
    proc nlmixed data=new tech=newrap maxit=2000;
    parms b0=-0.1 b1=1.1 b2=0.4 b3=.1 b4=0.2 s1-s3=0.0 a0=0 a1=0
    a2=0 a3=0;
sum=0;
do j=0 to n;
u=z1+ j*log(p) + (n-j)*log(1-p) + j*(n-j)*log(omega);
sum=sum+exp(u);
end;
LL=z2+ y*log(p) + (n-y)*log(1-p) + y*(n-y)*log(omega)-log(sum);
    model y~general(LL);
    predict p out=aa;
    predict omega out=bb;
    data q1;
    set aa;
    data q2;
    set bb;
    data qq4;
    merge q1 q2;
    do j=0 to n;
    u1=zz1+ j*log(psi) + (n-j)*log(1-psi) + j*(n-j)*log(omega);
    do k=0 to n-1;
u2=zz2+ k*log(psi) + (n-k-1)*log(1-psi) + (k+1)*(n-k-1)*log(omega);
    do t=0 to n-2;
    /* Generate p1,p2, expected values and variances*/;
    /* Generate Wald, LRT and Pearson’s GOFs */;
    if y=0 then lrt+0;
    else lrt+2*y*log(y/exp);
    proc print data=qq4;
    var n y psi p1 p2 exp omega var XX LRT Wald;
    format psi p1 p2 exp omega var xx LRT Wald 10.4;


Results from Table 10 show that, of cases I to III, the model for case II is the most parsimonious. The difference in the likelihood-ratio test statistic G2 between models II and III is 0.2307 on 1 d.f. (p-value = 0.6310), which is not significant. We have used G2 rather than the Wald or Pearson's X2 because only the G2 statistic has the partitioning property (see [18]). However, while this model seems the best, it does not tell us much about the probability of success (πi, i = 0, 1) for each group, since it assumes a common success probability for both groups. Our results further indicate that we probably do not need variable dispersion parameters for either probability model; that is, a common dispersion parameter would be adequate, since neither the φ nor the ω associated with the treatment groups is significant in model III. We therefore fit a reduced version of the case III model, which models the probability of success separately for the treatments but assumes a common dispersion parameter. These models are based on 16 d.f. Here, under the double binomial model, the estimated probabilities of fetal death for the control and experimental groups are respectively 0.0552 and 0.2332, and these estimated probabilities are constant across the treatment levels. The corresponding goodness-of-fit values are G2 = 9.1437 and X2 = 22.9671, with the common dispersion parameter estimate being $$\widehat \phi $$ = 0.4900. We notice a considerable discrepancy between the values of G2 and X2 here. This is because, of the twenty observations in the data, nine have zero values of Y; consequently, these observations do not contribute to the overall G2, and this accounts for the lower value of G2 compared to the corresponding X2.

For the MBM, while the estimated ψs are specific to each treatment and constant across each treatment, the estimated probabilities $$\widehat {{\pi _1}}$$ of success vary by the litter size n, as outlined in expression (4). Thus, for n = 8, $$\widehat {{\pi _1}}$$ equals 0.0769 and 0.2469 respectively for the control and treatment groups. We present in Table 11 the estimated probabilities and other variables under the multiplicative model for this case.


Table 11: Parameter estimates under the multiplicative model with constant dispersion parameter

We may note here that for this data we have also computed the Wald test statistic, and it gives the lowest value, 23.2406. The GOF values are cumulated, so that the last values give the sums over all observations.


Conclusion

Results presented in the preceding sections show that, while it is relatively easy to fit both the double binomial and the multiplicative binomial with joint sufficient statistics employing Poisson regression for frequency data, this approach cannot easily be implemented for data having covariates. Further, the sufficient-statistics approach is based on fewer degrees of freedom than the MgL method, which makes the MgL method more parsimonious in all cases. We would encourage the use of the MgL method in applications of these models to binary count data. Of the two binomial models, the multiplicative binomial seems more consistent and fits much better than the double binomial. Further, it does not have as many convergence problems as the DBM.

The SAS programs for implementing all the models discussed in this paper are readily available from the author. Meanwhile, we have included a typical program above for implementing the MgL approach for the Manly data discussed in section 5.3.


References

1. Lindsey JK, Altham PME (1998) Analysis of the human sex ratio by using overdispersion models. Appl Statist 47: 149–157.
2. Altham PME (1978) Two generalizations of the binomial distribution. Appl Statist 27: 162–167.
3. Efron B (1986) Double exponential families and their use in generalized linear regression. J Amer Statist Assoc 81: 709–721.
4. Lindsey JK (1995) Modelling Frequency and Count Data. Oxford University Press.
5. Lindsey JK, Mersch G (1992) Fitting and comparing probability distributions with log linear models. Computational Statistics and Data Analysis 13: 373–384. doi: 10.1016/0167-9473(92)90112-S
6. Lovinson G (1998) An alternative representation of Altham's multiplicative-binomial distribution. Statistics & Probability Letters 36: 415–420. doi: 10.1016/S0167-7152(97)00088-6
7. Cox DR (1972) The analysis of multivariate binary data. Applied Statistics 21: 113–120.
8. Elamir EAH (2013) Multiplicative-binomial distribution: Some results and characterization, inference and random data generation. Journal of Statistical Theory and Applications 12: 92–105. doi: 10.2991/jsta.2013.12.1.8
9. Feirer V, Friedl H, Hirn U (2013) Modeling over- and under-dispersed frequencies of successful ink transmissions onto paper. Journal of Applied Statistics 40: 626–643. doi: 10.1080/02664763.2012.750284
10. Sokal RR, Rohlf FJ (1969) Biometry: the Principles and Practice of Statistics in Biological Research. San Francisco: Freeman.
11. Geissler A (1889) Contribution to the question of the sex ratio of births. Zeitschrift des Königlichen Sächsischen Statistischen Bureaus 35: 1–24.
12. Borges P, Rodrigues J, Balakrishnan N (2014) A COM-Poisson type generalization of the binomial distribution and its properties and applications. Statistics & Probability Letters 87: 158–166.
13. Nelder JA, Mead R (1965) A simplex method for function minimization. Computer Journal 7: 308–312.
14. Morel JG, Neerchal NK (1997) Clustered binary logistic regression in teratology data using a finite mixture distribution. Stat Med 16: 2843–2853.
15. Morel JG, Neerchal NK (2012) Overdispersion Models in SAS. SAS Institute.
16. Manly B (1978) Regression models for proportions with extraneous variance. Biometrie-Praximetrie 18: 1–18.
17. Kupper LL, Haseman JK (1978) The use of a correlated binomial model for the analysis of certain toxicological experiments. Biometrics 34: 69–76.
18. Lawal HB (2003) Categorical Data Analysis with SAS and SPSS Applications. Taylor & Francis, New York.