The Living Thing / Notebooks : Count models (and regression thereof)

You have data and/or predictions made up of non-negative integers \({\mathbb N }\cup\{0\}\). What models can you fit to it?

I’m collecting some appropriate models for such data, so that I can do regression without reducing it to the two extreme cases usually dealt with: approximating the data as Gaussian, or as Bernoulli.

Also, is there a count-based formulation for non-negative regression? Non-negative matrix factorisations with appropriate loss function perhaps?

TODO:

  1. raid the document topic model literature for this. Surely they are implicitly count data? See string bags, compare with Steyvers and Tenenbaum’s semantic network model (StTe05).
  2. robust regression for all of these

All the distributions I discuss here have support unbounded above. Bounded distributions (e.g. the vanilla Binomial) are for some other time. The exception is the Bernoulli RV, a.k.a. the biased coin, which is simple enough to sneak in.

For the details of these models in a time series context, see TuSB14.

A lot of this material is probably in JoKK05.

Poisson

The Poisson is reminiscent of the Gaussian for count data, in terms of the number of places that it pops up, and the vast number of things that have a limiting Poisson distribution.

Conveniently, it has only one parameter, which means you don’t need to justify how you chose any other parameters, saving valuable time and thought. It’s useful as a “null model”, in that the number of particles in a realisation of a point process without interaction will be Poisson-distributed, conditional upon the mean measure. Conversely, non-Poisson residuals are evidence that your model has failed to account for some kind of interaction or hidden variable.

Spelled
\(\text{Poisson}(\lambda)\)
Pmf
\(\operatorname{P}(k;\lambda)={\frac {\lambda ^{k}}{k!}}e^{{-\lambda }}\)
Mean
\(\lambda\)
Variance
\(\lambda\)
Pgf
\(G(s;\lambda)=\exp(\lambda (s-1))\)
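As a minimal sketch of that one-parameter convenience (using numpy; the simulated data and the choice \(\lambda=4\) are mine), the MLE of \(\lambda\) is just the sample mean, and comparing sample mean to sample variance is the standard first check for over-dispersion:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.poisson(lam=4.0, size=100_000)

# The Poisson MLE for its single parameter is just the sample mean.
lam_hat = x.mean()

# Mean close to variance is the quick diagnostic for Poisson-ness;
# a variance much larger than the mean suggests over-dispersion.
print(lam_hat, x.var())
```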

Negative Binomial

Nearly a century old! (GrYu20) A generic count data model which, unlike the Poisson, has both a location-like and a scale-like parameter, instead of only one parameter. This makes one feel less dirty about using a model more restrictive than standard linear regression, which sets the benchmark for being castigated as too restrictive. It has a traditional rationale in terms of drawing balls from urns and such, which is of little interest here. The key point is that it is both flexible and uncontroversial.

It includes the Geometric as a special case when \(r=1\), and the Gamma and Poisson as limiting cases: in the large-\(k\) limit it approximates a Gamma distribution, and, when the mean is held constant, in the large-\(r\) limit it approaches the Poisson. (The Poisson is thus a limiting case of the Polya distribution below.) For fixed \(r\) it is an exponential family.

For all that, it’s still contrived, this model, and tedious to fit.

Spelled
\(\text{NB}(p,r)\)
Pmf
\(\operatorname{P}(k;p,r) = {k+r-1 \choose k}\cdot (1-p)^{r}p^{k} = \frac{\Gamma(k+r)}{k!\,\Gamma(r)} (1-p)^rp^k\)
Mean
\({\frac {pr}{1-p}}\)
Variance
\({\frac {pr}{(1-p)^{2}}}\)
Pgf
\(G_{NB}(s;p,r)=\left({\frac {1-p}{1-ps}}\right)^{{\!r}}{\text{ for }}|s|<{\frac 1p}\)
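If you lean on scipy here, beware that `scipy.stats.nbinom` swaps the roles of success and failure relative to the table above; a quick sketch of the translation (the values \(r=3, p=0.4\) are arbitrary):

```python
from scipy import stats

# scipy.stats.nbinom(n, p) has pmf C(k+n-1, k) p^n (1-p)^k,
# so scipy's p plays the role of this page's (1-p), and scipy's n is r.
r, p = 3, 0.4            # NB(p, r) in this page's notation
dist = stats.nbinom(r, 1 - p)

print(dist.mean())       # pr/(1-p) = 0.4*3/0.6 = 2.0
print(dist.var())        # pr/(1-p)^2 = 0.4*3/0.36
```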

Mean/dispersion parameterisation (Polya)

Commonly, where the \(r\) parameter is not required to be a non-negative integer, we call it a Polya model, and use it for over-dispersed data i.e. data that looks like a Poisson process if we are drunk, but whose variance is too big in comparison to its mean for soberer minds.

To see how that works we will reparameterise the model in terms of a “location” parameter \(\lambda\) and “dispersion”/scale-ish parameter \(\alpha\), such that we can rewrite it.

Spelled
\(\text{Polya}(\lambda,\alpha)\)
Pmf
\(\operatorname{P}(k;\lambda,\alpha) = \frac{\Gamma\left(k+\frac{1}{\alpha}\right)}{\Gamma(k+1)\Gamma\left(\frac{1}{\alpha}\right)} \left(\frac{\lambda}{\lambda+\frac{1}{\alpha}}\right)^k\left(1+\lambda\alpha\right)^{-\frac{1}{\alpha}}\)
Mean
\(\lambda\)
Variance
\(\lambda + \lambda^2\alpha\)
Pgf
\(G(s;\lambda,\alpha)=(\alpha (\lambda -\lambda s)+1)^{-1/\alpha }\)

The log Pmf is then

\begin{equation*} \log\Gamma\left(k+\frac{1}{\alpha}\right)-\log\Gamma(k+1)-\log\Gamma\left(\frac{1}{\alpha}\right)+k\left(\log\lambda-\log\left(\lambda+\frac{1}{\alpha}\right)\right)-\frac{\log(1+\lambda\alpha)}{\alpha} \end{equation*}

It will be apparent that all these log-gamma differences are numerically unstable, so we need to use different approximations depending on the combination of \(k,\lambda\) and \(\alpha\) values in play.

Aieee! Tedious!
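Still, a sketch of such an implementation, leaning on `scipy.special.gammaln` and `log1p` for the worst of the cancellation (the function name is mine; this covers moderate parameter ranges, and extreme \(\alpha\) may still want the special-case approximations just mentioned):

```python
import numpy as np
from scipy.special import gammaln

def polya_logpmf(k, lam, alpha):
    """Log-pmf of Polya(lam, alpha), the mean/dispersion negative binomial."""
    r = 1.0 / alpha
    return (gammaln(k + r) - gammaln(k + 1.0) - gammaln(r)
            + k * (np.log(lam) - np.log(lam + r))
            - r * np.log1p(lam * alpha))
```

For example, `polya_logpmf(5, 2.0, 0.5)` agrees with `scipy.stats.nbinom.logpmf(5, 2, 0.5)`, i.e. \(\log(6/128)\).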

I can’t understand why anyone bothers using the negative binomial as a model; the GPD is easier to fit, in that its log Pmf is numerically tractable for all parameter combinations, even with very large count values, and it’s no less natural; it even has, IMO, more plausible justifications than the Polya/NB model.

Geometric

A discrete analogue of the exponential: the probability distribution of the number \(X\) of failures in a sequence of Bernoulli trials before the first success, supported on the set \(\{0, 1, 2, 3, \dots\}\).

Spelled
\(\text{Geom}(p)\)
Pmf
\(\operatorname{P}(k;p) = (1-p)^k\,p\!\)
Pgf
\(G(s;p)=\frac{p}{1-s+sp}.\)
Mean
\(\frac {1-p}{p}\)
Variance
\(\frac {1-p}{p^{2}}\)

Note that \({\text{Geom}}(p)={\text{NB}}(1-p,\,1)\) in the parameterisation above.

Mean parameterisation

We can parameterise this in terms of the mean \(\lambda=\frac {1-p}{p}\Rightarrow p=\frac{1}{\lambda+1}\)

Spelled
\(\text{MGeom}(\lambda)\)
Pmf
\(\operatorname{P}(k;\lambda) = \left(\frac{\lambda}{\lambda+1}\right)^k\frac{1}{\lambda+1}\)
Pgf
\(G(s;\lambda)=\frac{1}{1+\lambda(1-s)}.\)
Mean
\(\lambda\)
Variance
\(\lambda^2+\lambda\)
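For what it’s worth, scipy’s geometric counts the trial that succeeds, so the failures-only convention used here needs a shift; a sketch (the value \(\lambda=3\) is arbitrary):

```python
from scipy import stats

lam = 3.0
p = 1.0 / (lam + 1.0)           # mean parameterisation

# scipy.stats.geom is supported on {1, 2, ...} (trials including the success);
# loc=-1 shifts it to the number-of-failures convention used on this page.
dist = stats.geom(p, loc=-1)
print(dist.mean(), dist.var())  # lambda and lambda^2 + lambda
```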

Lagrangian distributions

Another clade of distributions where we work backwards from the pgf, although this time we generate the distribution from a function, or rather, two functions. This family includes various others on this page; I will check which some day. For now, let’s get to the interesting new ones.

For it is more like a jungle of distributions, requiring a map to hack through it. There are various parameter estimation methods and properties, each applicable to different sub-families. Sometimes the forms of the mass function are explicit and easy; others, not so much.

See CoFa06 for the authoritative list, plus JoKK05 Ch7.2 for a brusquer version.

It’s interesting to me because (CoFa06 Ch 6.2, CoSh88) the total cascade size of a subcritical branching process has a “delta Lagrangian” or “general Lagrangian” distribution, depending on whether the cluster has a deterministic or random starting population. We will define offspring distribution of such a branching process as \(G\sim G_Y(\eta, \alpha)\), with \(EG:=\eta\lt 1\).

Let’s get specific.

Poisson-Poisson Lagrangian

See Consul and Famoye (CoFa06, 9.3). Also known as the Generalised Poisson, although there are many things called that.

There are many possible interpretations of this; I will choose the interpretation in terms of cascade sizes of branching processes, in which case we have

  • Poisson(\(\mu\)) initial distribution,
  • Poisson(\(\eta\)) offspring distribution.

Then…

Spelled
\(GPD(\mu,\eta)\)
Pmf
\(\operatorname{P}(X=x;\mu,\eta)=\frac{\mu(\mu+ \eta x)^{x-1}}{x!e^{\mu+x\eta}}\)
Mean
\(\frac{\mu}{1-\eta}\)
Variance
\(\frac{\mu}{(1-\eta)^3}\)

Notice that this can produce long tails, in the sense that it can have a very large variance with finite mean, but not heavy tails, in the sense of the variance becoming infinite while retaining a finite mean. (Q: What non-negative distribution has the quality of parameterised explosion of moments, apart from the inconvenient discrete-stable?)

Here, I implemented the Poisson-Poisson GPD in python for you.
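In that spirit, a minimal sketch of the log-Pmf (the function name is mine, not necessarily the linked implementation’s; the normalisation over \(x=0,1,2,\dots\) holds for \(0\le\eta<1\)):

```python
import numpy as np
from scipy.special import gammaln

def gpd_logpmf(x, mu, eta):
    """Log-pmf of GPD(mu, eta) for 0 <= eta < 1: the cascade-size distribution
    of a branching process with Poisson(mu) initial and Poisson(eta) offspring."""
    x = np.asarray(x, dtype=float)
    return (np.log(mu) + (x - 1.0) * np.log(mu + eta * x)
            - (mu + eta * x) - gammaln(x + 1.0))
```

Sanity checks: `gpd_logpmf(0, mu, eta)` is \(-\mu\), and setting `eta=0` recovers the Poisson log-Pmf.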

Basic Lagrangian distribution

Summarised in CoSh72.

One parameter, which is itself a function: a differentiable (infinitely so?) \(g: [0,1]\rightarrow \mathbb{R} \text{ s.t. } g(0)\neq 0\text{ and } g(1)=1\), not necessarily a pgf. We then define a pgf \(\psi(s)\) implicitly by the smallest root of the Lagrange transformation \(z=sg(z)\). The paradigmatic example of such a function is \(g:z\mapsto 1-p+pz\); let’s check this fella out.

Spelled
?
Pmf
?
Mean
?
Variance
?

TBD

Delta Lagrangian distributions

TBD

General Lagrangian distributions

TBD

Discrete Stable

Another generalisation of Poisson, with until-recently hip features such as a power-law tail.

By analogy with the continuous-stable distribution, a “stable” family for count data.

In particular, this is stable in the sense that it is a limiting distribution for sums of count random variables, analogous to the continuous stable family for real-valued RVs.

No (convenient) closed form for the Pmf in the general case, but the Pgf is simple, so that’s something.

Spelled
\(\text{DS}(\nu,a)\)
Pmf
\(\operatorname{P}(k;\nu,a)= \left.\frac{1}{k!} \frac{d^kG(s;\nu,a)}{ds^k}\right|_{s=0}.\) (which is simply the usual formula for extracting the Pmf from any Pgf.)
Pgf
\(G(s;\nu,a)=\exp(-a (1-s)^\nu).\)
Mean
\(\operatorname{E}(X) = G'(1^-)\), which equals \(a\) when \(\nu=1\) (the Poisson case) and is infinite when \(\nu\lt 1\).
Variance
\(\operatorname{Var}(X)=G''(1^-) + \operatorname{E}(X) - \operatorname{E}^2(X)\), likewise infinite when \(\nu\lt 1\).

Here, \(a>0\) is a scale parameter and \(0\lt\nu\leq 1\) a dispersion parameter describing in particular a power-law tail such that when \(\nu\lt 1\),

\begin{equation*} \lim_{k \to \infty}\operatorname{P}(k;\nu,a) \simeq \frac{1}{k^{\nu+1}}. \end{equation*}

Question: the Pgf formulation implies this is a non-negative distribution. Does that mean that symmetric discrete RVs cannot be stable? Possibly-negative ones?

Nola99 and Nola01 give some approximate ML-estimators of the parameter. Lee10 does some interesting stuff:

This thesis considers the interplay between the continuous and discrete properties of random stochastic processes. It is shown that the special cases of the one-sided Lévy-stable distributions can be connected to the class of discrete-stable distributions through a doubly-stochastic Poisson transform. This facilitates the creation of a one-sided stable process for which the N-fold statistics can be factorised explicitly. […] Using the same Poisson transform interrelationship, an exact method for generating discrete-stable variates is found. It has already been shown that discrete-stable distributions occur in the crossing statistics of continuous processes whose autocorrelation exhibits fractal properties. The statistical properties of a nonlinear filter analogue of a phase-screen model are calculated, and the level crossings of the intensity analysed. […] The asymptotic properties of the inter-event density of the process are found to be accurately approximated by a function of the Fano factor and the mean of the crossings alone.

Zipf/Zeta models

The discrete version of the basic power-law models.

While we are here, the plainest explanation of the relation of the Zipf to the Pareto distribution that I know is Lada Adamic’s Zipf, Power-laws, and Pareto - a ranking tutorial.

Spelled
\(\text{Zipf}(s)\)
Pmf
\(\operatorname{P}(k;s)={\frac {1/k^{s}}{\zeta (s)}}\)
Mean
\({\frac {\zeta (s-1)}{\zeta (s)}}~{\textrm {for}}~s>2\)
Variance
\({\frac {\zeta (s)\zeta (s-2)-\zeta (s-1)^{2}}{\zeta (s)^{2}}}~{\textrm {for}}~s>3\)

This has unbounded support. In the bounded case, it becomes the Zipf–Mandelbrot law, which is too fiddly for me to discuss here unless I turn out to really need it, which would likely be for ranking statistics.
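scipy ships this one; a quick sketch against the table (the exponent \(s=3\) is arbitrary):

```python
from scipy import stats
from scipy.special import zeta

s = 3.0
dist = stats.zipf(s)        # scipy calls the exponent `a`

print(dist.pmf(1))          # 1 / zeta(3)
print(dist.pmf(2))          # 2^{-3} / zeta(3)
print(dist.mean())          # zeta(2) / zeta(3), finite since s > 2
```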

Yule-Simon

Spelled
\(\text{YS}(\rho)\)
Pmf
\(\operatorname{P}(k;\rho)=\rho \,{\mathrm {B}}(k,\rho +1),\,\)
Mean
\({\frac {\rho }{\rho -1}},\, \rho \gt 1\)
Variance
\({\frac {\rho^2}{(\rho-1)^2\;(\rho -2)}}\,,\, \rho \gt 2\)

where B is the beta function.

Zipf law in the tail. See also the two-parameter version, which replaces the beta function with an incomplete beta function, giving Pmf \(\operatorname{P}(k;\rho,\alpha )={\frac {\rho }{1-\alpha ^{{\rho }}}}\;{\mathrm {B}}_{{1-\alpha }}(k,\rho +1),\,\)
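scipy also ships the one-parameter version, as `scipy.stats.yulesimon`; a quick check of the table (the choice \(\rho=3\) is arbitrary):

```python
from scipy import stats

rho = 3.0
dist = stats.yulesimon(rho)

print(dist.pmf(1))   # rho * B(1, rho+1) = 3 * (1/4) = 0.75
print(dist.mean())   # rho/(rho - 1) = 1.5
```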

I’m bored with this one too.

Conway-Maxwell-Poisson

Exponential family count model with a free variance parameter. See SMKB05 and ChSh16.

Divisibility, decomposability, stability

TODO: disambiguate these notions, and clarify what each means for discrete- vs continuous-valued RVs.

Stability

See HaSt93.

A well-known distribution construction: the stable distribution class.

For a continuous-valued, continuously- or discretely-indexed stochastic process \(X(t)\), \(\alpha\)-stability implies that the law of the value of the process at certain times satisfies a stability equation

\begin{equation*} X(a) \triangleq W^{1/\alpha}X(b), \end{equation*}

where \(0 < a < b\), \(\alpha> 0\) and \(W\sim \mathrm{Unif}([0,1])\perp X\).

The marginal distributions of such processes are those of the \(\alpha\)-stable processes. For \(\alpha=2\) we have Gaussians and for \(\alpha=1\), the Cauchy law.

(NB - how does this reduce to the usual linear-combination formulation?)

By analogy we may construct a stability equation for count RVs:

\begin{equation*} X(a) \triangleq W^{1/\alpha}\odot X(b), \end{equation*}

\(\odot\) here is Steutel and van Harn’s discrete multiplication operator, which I won’t define here exhaustively because there are variously complex formulations of it, and I don’t care enough to wrangle them. In the simplest case it gives us a binomial thinning of the right operand, with the left operand as the retention probability

\begin{equation*} A\odot B \sim \mathrm{Binom}(B,A) \end{equation*}
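In this binomial-thinning case, \(\alpha\odot X\) keeps each of the \(X\) counts independently with probability \(\alpha\); a sketch (the choice of Poisson input and \(\alpha=0.3\) is mine):

```python
import numpy as np

rng = np.random.default_rng(1)

def thin(alpha, x):
    """Binomial thinning: keep each of x counts independently with prob alpha."""
    return rng.binomial(x, alpha)

x = rng.poisson(10.0, size=100_000)
y = thin(0.3, x)
print(x.mean(), y.mean())   # thinning scales the mean by alpha
```

Thinning a Poisson(\(10\)) by \(0.3\) yields a Poisson(\(3\)), which the sample mean and variance of `y` should both reflect.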

Self-divisibility

Poisson RVs are self-divisible, in the sense that

\begin{equation*} X_1\sim \text{Poisson}(\lambda_1),\,X_2\sim \text{Poisson}(\lambda_2),\,X_1\perp X_2 \Rightarrow X_1+X_2\sim \text{Poisson}(\lambda_1+\lambda_2). \end{equation*}
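A two-line numerical check of that closure (sample sizes and rates are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.poisson(2.0, size=200_000)
x2 = rng.poisson(5.0, size=200_000)
total = x1 + x2

# The sum should behave as Poisson(7): mean and variance both near 7.
print(total.mean(), total.var())
```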

Polya RVs likewise are self-divisible, if uglier under this parameterisation, and only within sub-families sharing a common \(p\), i.e. with \(\alpha_1\lambda_1=\alpha_2\lambda_2\):

\begin{equation*} X_1\sim \text{Polya}(\lambda_1, \alpha_1),\,X_2\sim \text{Polya}(\lambda_2, \alpha_2),\,X_1\perp X_2 \end{equation*}
\begin{equation*} \Rightarrow \end{equation*}
\begin{equation*} X_1+X_2\sim \text{Polya}\left(\lambda_1+\lambda_2,\, \frac{\alpha_1\lambda_1^2+\alpha_2\lambda_2^2}{(\lambda_1+\lambda_2)^2}\right) \end{equation*}
\begin{equation*} \operatorname{Var}(X_1+X_2)=\operatorname{Var}(X_1)+\operatorname{Var}(X_2) \end{equation*}

So are GPDs, in \(\mu\), for a common \(\eta\).

Read!

Blak00
Blaker, H. (2000) Confidence Curves and Improved Exact Confidence Intervals for Discrete Distributions. The Canadian Journal of Statistics / La Revue Canadienne de Statistique, 28(4), 783–798. DOI.
CaXi15
Cao, Y., & Xie, Y. (2015) Poisson Matrix Recovery and Completion. arXiv:1504.05229 [Cs, Math, Stat].
ChSh16
Chatla, S. B., & Shmueli, G. (2016) Modeling Big Count Data: An IRLS Framework for CMP Regression and GAM. arXiv:1610.08244 [Stat].
Cons88
Consul, P. C.(1988) Generalized Poisson Distributions. . New York: CRC Press
CoFa92
Consul, P. C., & Famoye, F. (1992) Generalized poisson regression model. Communications in Statistics - Theory and Methods, 21(1), 89–109. DOI.
CoFa06
Consul, P. C., & Famoye, F. (2006) Lagrangian probability distributions. . Boston: Birkhäuser
CoFe89
Consul, P. C., & Felix, F. (1989) Minimum variance unbiased estimation for the lagrange power series distributions. Statistics, 20(3), 407–415. DOI.
CoJa73
Consul, P. C., & Jain, G. C.(1973) A Generalization of the Poisson Distribution. Technometrics, 15(4), 791–799. DOI.
CoSh73
Consul, P. C., & Shenton, L. R.(1973) Some interesting properties of Lagrangian distributions. Communications in Statistics, 2(3), 263–272. DOI.
CoSh75
Consul, P. C., & Shenton, L. R.(1975) On the Probabilistic Structure and Properties of Discrete Lagrangian Distributions. In G. P. Patil, S. Kotz, & J. K. Ord (Eds.), A Modern Course on Statistical Distributions in Scientific Work (pp. 41–57). Springer Netherlands DOI.
CoSh84
Consul, P. C., & Shoukri, M. M.(1984) Maximum likelihood estimation for the generalized poisson distribution. Communications in Statistics - Theory and Methods, 13(12), 1533–1547. DOI.
CoSh88
Consul, P. C., & Shoukri, M. M.(1988) Some Chance Mechanisms Related to a Generalized Poisson Probability Model. American Journal of Mathematical and Management Sciences, 8(1–2), 181–202. DOI.
CoSh72
Consul, P., & Shenton, L. (1972) Use of Lagrange Expansion for Generating Discrete Generalized Probability Distributions. SIAM Journal on Applied Mathematics, 23(2), 239–248. DOI.
Cox83
Cox, D. R.(1983) Some remarks on overdispersion. Biometrika, 70(1), 269–274. DOI.
DoJL09
Doray, L. G., Jiang, S. M., & Luong, A. (2009) Some Simple Method of Estimation for the Parameters of the Discrete Stable Distribution with the Probability Generating Function. Communications in Statistics - Simulation and Computation, 38(9), 2004–2017. DOI.
Free80
Freeman, G. H.(1980) Fitting Two-Parameter Discrete Distributions to Many Data Sets with One Common Parameter. Journal of the Royal Statistical Society. Series C (Applied Statistics), 29(3), 259–267. DOI.
Garc11
Garcia, J. M. G.(2011) A fixed-point algorithm to estimate the Yule–Simon distribution parameter. Applied Mathematics and Computation, 217(21), 8560–8566. DOI.
GoMY04
Goldstein, M. L., Morris, S. A., & Yen, G. G.(2004) Problems with fitting to the power-law distribution. The European Physical Journal B - Condensed Matter and Complex Systems, 41(2), 255–258. DOI.
GrYu20
Greenwood, M., & Yule, G. U.(1920) An Inquiry into the Nature of Frequency Distributions Representative of Multiple Happenings with Particular Reference to the Occurrence of Multiple Attacks of Disease or of Repeated Accidents. Journal of the Royal Statistical Society, 83(2), 255–279. DOI.
HoJM02
Hopcraft, K. I., Jakeman, E., & Matthews, J. O.(2002) Generation and monitoring of a discrete stable random process. Journal of Physics A: Mathematical and General, 35(49), L745. DOI.
HoJM04
Hopcraft, K. I., Jakeman, E., & Matthews, J. O.(2004) Discrete scale-free distributions and associated limit theorems. Journal of Physics A: Mathematical and General, 37(48), L635. DOI.
HLSG09
Hubert, P. C., Lauretto, M. S., Stern, J. M., Goggans, P. M., & Chan, C.-Y. (2009) FBST for generalized Poisson distribution. In AIP Conference Proceedings (Vol. 1193, p. 210).
Imot16
Imoto, T. (2016) Properties of Lagrangian distributions. Communications in Statistics - Theory and Methods, 45(3), 712–721. DOI.
Jana84
Janardan, K. (1984) Moments of Certain Series Distributions and Their Applications. SIAM Journal on Applied Mathematics, 44(4), 854–868. DOI.
JoKK05
Johnson, N. L., Kemp, A. W., & Kotz, S. (2005) Univariate discrete distributions. (3rd ed.). Hoboken, N.J: Wiley
Lee10
Lee, W. H.(2010, July) Continuous and discrete properties of stochastic processes.
LeHJ08
Lee, W. H., Hopcraft, K. I., & Jakeman, E. (2008) Continuous and discrete stable processes. Physical Review E, 77(1), 011109. DOI.
LiFL10
Li, S., Famoye, F., & Lee, C. (2010) On the generalized Lagrangian probability distributions. Journal of Probability and Statistical Science, 8(1), 113–123.
Lloy07
Lloyd-Smith, J. O.(2007) Maximum Likelihood Estimation of the Negative Binomial Dispersion Parameter for Highly Overdispersed Data, with Applications to Infectious Diseases. PLoS ONE, 2(2), e180. DOI.
MSFH06
Mailier, P. J., Stephenson, D. B., Ferro, C. A. T., & Hodges, K. I.(2006) Serial Clustering of Extratropical Cyclones. Monthly Weather Review, 134(8), 2224–2240. DOI.
Muta95
Mutafchiev, L. (1995) Local limit approximations for Lagrangian distributions. Aequationes Mathematicae, 49(1), 57–85. DOI.
Neym65
Neyman, J. (1965) Certain Chance Mechanisms Involving Discrete Distributions. Sankhyā: The Indian Journal of Statistics, Series A (1961-2002), 27(2/4), 249–258.
Nola97a
Nolan, J. P.(1997a) Numerical calculation of stable densities and distribution functions. Communications in Statistics. Stochastic Models, 13(4), 759–774. DOI.
Nola97b
Nolan, J. P.(1997b) Parameter estimation and data analysis for stable distributions. (Vol. 1, pp. 443–447). IEEE Comput. Soc DOI.
Nola99
Nolan, J. P.(1999) Fitting data and assessing goodness-of-fit with stable distributions.
Nola01
Nolan, J. P.(2001) Maximum Likelihood Estimation and Diagnostics for Stable Distributions. In O. E. Barndorff-Nielsen, S. I. Resnick, & T. Mikosch (Eds.), Lévy Processes (pp. 379–400). Birkhäuser Boston DOI.
Nola00
Nolan, J. P.(n.d.) Modeling financial data with stable distributions.
Pieg90
Piegorsch, W. W.(1990) Maximum likelihood estimation for the negative binomial dispersion parameter. Biometrics, 863–867.
SaPa05
Saha, K., & Paul, S. (2005) Bias-corrected maximum likelihood estimator of the negative binomial dispersion parameter. Biometrics, 61(1), 179–185. DOI.
SYSS06
Santhanam, G., Yu, B. M., Shenoy, K. V., & Sahani, M. (2006) Factor analysis with Poisson output. . Technical Report NPSL-TR-06-1. Stanford, CA: Stanford Univ
SMKB05
Shmueli, G., Minka, T. P., Kadane, J. B., Borle, S., & Boatwright, P. (2005) A Useful Distribution for Fitting Discrete Data: Revival of the Conway-Maxwell-Poisson Distribution. Journal of the Royal Statistical Society. Series C (Applied Statistics), 54(1), 127–142.
ShCo87
Shoukri, M. M., & Consul, P. C.(1987) Some Chance Mechanisms Generating the Generalized Poisson Probability Models. In I. B. MacNeill, G. J. Umphrey, A. Donner, & V. K. Jandhyala (Eds.), Biostatistics (pp. 259–268). Dordrecht: Springer Netherlands
SiMS94
Sibuya, M., Miyawaki, N., & Sumita, U. (1994) Aspects of Lagrangian Probability Distributions. Journal of Applied Probability, 31, 185–197. DOI.
SoSA09
Soltani, A. R., Shirvani, A., & Alqallaf, F. (2009) A class of discrete distributions induced by stable laws. Statistics & Probability Letters, 79(14), 1608–1614. DOI.
StHa79
Steutel, F. W., & van Harn, K. (1979) Discrete Analogues of Self-Decomposability and Stability. The Annals of Probability, 7(5), 893–899. DOI.
StTe05
Steyvers, M., & Tenenbaum, J. B.(2005) The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Cognitive Science, 29(1), 41–78. DOI.
Tsou06
Tsou, T.-S. (2006) Robust Poisson regression. Journal of Statistical Planning and Inference, 136(9), 3173–3186. DOI.
Tuen00
Tuenter, H. J. H.(2000) On the Generalized Poisson Distribution. Statistica Neerlandica, 54(3), 374–376. DOI.
TuSB14
Turkman, K. F., Scotto, M. G., & Bermudez, P. de Z. (2014) Models for Integer-Valued Time Series. In Non-Linear Time Series (pp. 199–244). Springer International Publishing
HaSt93
van Harn, K., & Steutel, F. W.(1993) Stability equations for processes with stationary independent increments using branching processes and Poisson mixtures. Stochastic Processes and Their Applications, 45(2), 209–230. DOI.
HaSV82
van Harn, K., Steutel, F. W., & Vervaat, W. (1982) Self-decomposable discrete distributions and branching processes. Zeitschrift Für Wahrscheinlichkeitstheorie Und Verwandte Gebiete, 61(1), 97–118. DOI.
ViVS10
Villarini, G., Vecchi, G. A., & Smith, J. A.(2010) Modeling the Dependence of Tropical Storm Counts in the North Atlantic Basin on Climate Indices. Monthly Weather Review, 138(7), 2681–2705. DOI.
VSCM09
Vitolo, R., Stephenson, D. B., Cook, I. M., & Mitchell-Wallace, K. (2009) Serial clustering of intense European storms. Meteorologische Zeitschrift, 18(4), 411–424. DOI.
WeBK03
Wedel, M., Böckenholt, U., & Kamakura, W. A.(2003) Factor models for multivariate count data. Journal of Multivariate Analysis, 87(2), 356–369. DOI.