The Living Thing / Notebooks :

Inference in graphical models

Given what I know about what I know, what do I know?

Introductory reading

People recommend Koller and Friedman (KoFr09) to me, which covers many different flavours of graphical model and many different inference methods, but I personally didn’t like it. It drowned me in details without motivation, and left me feeling drained yet uninformed. YMMV.

Spirtes et al (SpGS01) and Pearl (Pear08) are readable. Murphy’s textbook (Murp12) has a minimal introduction intermixed with some related models, with a more ML, more Bayesian formalism. I’ve had Lauritzen (Laur96) recommended too, and it’s very abstract but quite clear and feels less ad hoc.

Directed graphs

Graphs of conditional, directed independence are a convenient formalism for many models.

What’s special here is how we handle independence relations and reasoning about them. In one sense there is nothing special about graphical models; it’s just a graph of which variables are conditionally independent of which others. On the other hand, that graph is a powerful analytic tool, telling you what is confounded with what, and when. Moreover, you can use conditional independence tests to construct that graph even without necessarily constructing the whole model (e.g. ZPJS12).
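To make the conditional-independence idea concrete, here is a minimal sketch (my own illustration, not from any of the cited texts) of the classic chain X → Y → Z: X and Z are marginally dependent, but conditioning on Y renders them independent, which a partial-correlation test can detect without fitting the full joint model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate the chain X -> Y -> Z with linear-Gaussian links.
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)

def partial_corr(a, b, given):
    """Correlation of a and b after regressing out `given` from each."""
    g = np.column_stack([np.ones_like(given), given])
    ra = a - g @ np.linalg.lstsq(g, a, rcond=None)[0]
    rb = b - g @ np.linalg.lstsq(g, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

marginal = np.corrcoef(x, z)[0, 1]   # clearly nonzero: X and Z are dependent
conditional = partial_corr(x, z, y)  # near zero: X is independent of Z given Y
```

Constraint-based structure learning (as in SpGS01 or ZPJS12) builds on exactly this kind of test, run over many variable triples.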

Once you have the graph, you can infer more detailed relations than mere conditional dependence or independence; this is precisely what hierarchical models emphasise.

These can even be causal graphical models, and when we can infer those we are extracting Science (ONO) from observational data. This is really interesting; see causal graphical models.

Undirected, a.k.a. Markov graphs

a.k.a. Markov random fields, Markov networks. (Other types?)

I would like to know about spatial Poisson random fields, Markov random fields, Bernoulli (or is it Boolean?) random fields, especially for discrete multivariate sequences. Also Gibbs and Boltzmann distribution inference.
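As a toy example of a Gibbs/Boltzmann-style undirected model, here is a sketch (my own, with arbitrary parameter choices) of single-site Gibbs sampling on an Ising-type Markov random field: each spin's conditional distribution depends only on its grid neighbours, which is the Markov property doing the work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ising-type MRF on an L x L grid:
# p(s) ∝ exp(beta * sum of s_i * s_j over neighbouring pairs), s_i in {-1, +1}.
L, beta, sweeps = 16, 0.5, 200
s = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps):
    for i in range(L):
        for j in range(L):
            # Local field from the four neighbours (free boundaries).
            h = 0.0
            if i > 0:
                h += s[i - 1, j]
            if i < L - 1:
                h += s[i + 1, j]
            if j > 0:
                h += s[i, j - 1]
            if j < L - 1:
                h += s[i, j + 1]
            # Gibbs update: the conditional of s[i, j] given its
            # neighbours is logistic in the local field h.
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h))
            s[i, j] = 1 if rng.random() < p_plus else -1

magnetisation = abs(s.mean())  # order parameter; large when beta is large
```

The same update scheme generalises to any MRF whose local conditionals are tractable, which is why Gibbs sampling and undirected models are so often mentioned in the same breath.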

A smartarse connection to neural networks is in Ranz13.

Factor graphs

A unifying formalism for directed and undirected graphical models. How does that work, then?

Wikipedia

A factor graph is a bipartite graph representing the factorization of a function. In probability theory and its applications, factor graphs are used to represent factorization of a probability distribution function, enabling efficient computations, such as the computation of marginal distributions through the sum-product algorithm.

Hmm.
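A tiny worked example (mine, with made-up factor tables) of the sum-product idea from the quote above: on a chain factor graph over three binary variables, the marginal of the middle variable computed by message passing agrees with brute-force enumeration of the joint, but without ever materialising all eight joint states.

```python
import itertools

import numpy as np

rng = np.random.default_rng(2)

# A chain factor graph over three binary variables:
# p(x1, x2, x3) ∝ f12(x1, x2) * f23(x2, x3), with random positive factors.
f12 = rng.uniform(0.1, 1.0, size=(2, 2))
f23 = rng.uniform(0.1, 1.0, size=(2, 2))

# Brute force: enumerate all 8 joint states, marginalise out x1 and x3.
joint = np.zeros(2)
for x1, x2, x3 in itertools.product(range(2), repeat=3):
    joint[x2] += f12[x1, x2] * f23[x2, x3]
brute = joint / joint.sum()

# Sum-product: each factor sends x2 a message (its table summed over the
# other variable); the belief is the normalised product of the messages.
msg_from_f12 = f12.sum(axis=0)  # sum over x1
msg_from_f23 = f23.sum(axis=1)  # sum over x3
belief = msg_from_f12 * msg_from_f23
belief /= belief.sum()
```

For trees the agreement is exact; on graphs with loops the same updates give "loopy" belief propagation, which is only approximate.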

Chain graphs

Partially directed random fields, for some definition thereof? The classic chain graph of the 1980s allows you to have sets of mutually influencing variables, connected to one another by directed acyclic influence.

Implementations

Pedagogically useful, although probably not industrial-grade: David Barber’s discrete graphical model code (Julia).

Refs