Causal graphical models

Reproduced from James F. Fixx’s puzzle book (Fixx77), found in a recycling bin:

Farklers

The danger of folk statistics. The problems of excluded variables.

Directed graphical models with the additional assumption that \(A\rightarrow B\) may be read as “A causes a change in B”.
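
For concreteness, a minimal structural-equation sketch of what that reading buys you (a toy model of my own, not taken from any reference here): clamping \(A\) by intervention moves \(B\), clamping \(B\) leaves \(A\) alone, even though the two look symmetric as far as correlation is concerned.

```python
# Toy structural causal model, assumed for illustration: B := 2A + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def simulate(do_a=None, do_b=None):
    """Draw from the model, optionally clamping A or B by intervention."""
    a = rng.normal(size=n) if do_a is None else np.full(n, do_a)
    b = 2.0 * a + rng.normal(size=n) if do_b is None else np.full(n, do_b)
    return a, b

a_obs, b_obs = simulate()
_, b_do_a = simulate(do_a=1.0)      # do(A = 1): B responds
a_do_b, _ = simulate(do_b=1.0)      # do(B = 1): A does not

print(round(b_obs.mean(), 2), round(b_do_a.mean(), 2))  # ≈ 0.0 vs ≈ 2.0
print(round(a_obs.mean(), 2), round(a_do_b.mean(), 2))  # ≈ 0.0 vs ≈ 0.0
```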

Observational studies, confounding, adjustment criteria, d-separation, identifiability, interventions, Markov equivalence, identification of hidden variables.

When can I use my crappy observational data, collected without a good experimental design for whatever reason, to do interventional inference? There is a lot of research into this; I should summarise the salient bits for myself. In fact I did: I ran a reading group on it. See also quantum causal graphical models, and the use of classical causal graphical models to eliminate hidden quantum causes. “With great spreadsheets comes great responsibility.”
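
Here is the basic move in miniature, under an assumed toy structural model with one observed confounder: the naive regression of outcome on treatment is biased, while adjusting for the confounder (the back-door adjustment) recovers the interventional effect. All names and coefficients below are made up for illustration.

```python
# Back-door adjustment on simulated data: Z -> A, Z -> B, A -> B.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
z = rng.normal(size=n)                      # confounder
a = 1.5 * z + rng.normal(size=n)            # "treatment"
b = 0.5 * a + 2.0 * z + rng.normal(size=n)  # outcome; true causal effect of A is 0.5

# Naive observational estimate: regress B on A alone (biased upward by Z).
naive = np.polyfit(a, b, 1)[0]

# Back-door adjustment: regress B on A and Z jointly.
X = np.column_stack([a, z, np.ones(n)])
adjusted = np.linalg.lstsq(X, b, rcond=None)[0][0]

print(f"naive slope    ≈ {naive:.2f}")      # well above 0.5
print(f"adjusted slope ≈ {adjusted:.2f}")   # ≈ 0.5, the interventional effect
```

The whole game is knowing which variables play the role of \(Z\), i.e. which adjustment sets the graph licenses; that is what the adjustment criteria above are about.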

Avoidance of the ecological fallacy in mean-field approximation. Simpson’s paradox.
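
The classic kidney-stone-style counts (illustrative numbers only, not a new dataset) make Simpson’s paradox concrete: treatment A wins within every stratum and still loses in the pooled table, because stratum membership confounds treatment assignment.

```python
# Simpson's paradox: per-stratum success rates versus the pooled rate.
counts = {
    # stratum: {treatment: (successes, total)}
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

for stratum, arms in counts.items():
    rates = {t: s / total for t, (s, total) in arms.items()}
    print(stratum, {t: round(r, 2) for t, r in rates.items()})   # A wins in both

pooled = {
    t: sum(counts[s][t][0] for s in counts) / sum(counts[s][t][1] for s in counts)
    for t in ("A", "B")
}
print("pooled", {t: round(r, 2) for t, r in pooled.items()})     # B wins pooled
```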

Spurious correlation induced by sampling bias
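
Berkson-style selection bias in a few lines (a toy simulation): two independent traits become negatively correlated once we only look at the selected sample, i.e. once we condition on a collider.

```python
# Conditioning on a collider manufactures correlation out of nothing.
import numpy as np

rng = np.random.default_rng(2)
talent = rng.normal(size=500_000)
looks = rng.normal(size=500_000)            # independent of talent by construction

admitted = talent + looks > 1.5             # selection depends on both
print(round(np.corrcoef(talent, looks)[0, 1], 2))                      # ≈ 0
print(round(np.corrcoef(talent[admitted], looks[admitted])[0, 1], 2))  # clearly negative
```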

See also graphical models, hierarchical models.

Gwern on Causality:

I speculate that in realistic causal networks or DAGs, the number of possible correlations grows faster than the number of possible causal relationships. So confounds really are that common, and since people do not think in DAGs, the imbalance also explains overconfidence.
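
A rough sanity check of that intuition (my own toy simulation; the graph size and edge probability are arbitrary): in random DAGs, count the pairs where one variable is an ancestor of the other (a causal relationship) versus the pairs that are marginally d-connected (share an ancestor, so may be correlated). The latter always dominates, usually by a wide margin.

```python
# Count causal pairs vs d-connected (potentially correlated) pairs in random DAGs.
import itertools
import random

def random_dag(n, p, rng):
    """Random DAG: edges only go forward in a random node order, so no cycles."""
    order = list(range(n))
    rng.shuffle(order)
    return {(order[i], order[j]) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p}

def ancestors(n, edges):
    """anc[v] = set of ancestors of v, by propagating along edges to a fixed point."""
    anc = {v: set() for v in range(n)}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            new = ({u} | anc[u]) - anc[v]
            if new:
                anc[v] |= new
                changed = True
    return anc

rng = random.Random(3)
n, p, trials = 12, 0.15, 200
causal = connected = 0
for _ in range(trials):
    anc = ancestors(n, random_dag(n, p, rng))
    for x, y in itertools.combinations(range(n), 2):
        causal += x in anc[y] or y in anc[x]                 # one causes the other
        connected += bool((anc[x] | {x}) & (anc[y] | {y}))   # common ancestor: d-connected
print(f"causal pairs: {causal}, d-connected pairs: {connected}")
```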

Tutorials online

Tutorial: David Sontag and Uri Shalit, Causal inference from observational studies.

Felix Elwert’s summary is punchy. (Elwe13)

Chapter 3 of (some edition of) Pearl’s book is available as an author’s preprint: Parts 1, 2, 3, 4, 5, 6.

Counterfactuals

TBD.

Propensity scores

RuWa06 comes recommended by Shalizi as:

A good description of Rubin et al.’s methods for causal inference, adapted to the meanest understanding. […] Rubin and Waterman do a very good job of explaining, in a clear and concrete problem, just how and why the newer techniques of causal inference are valuable, with just enough technical detail that it doesn’t seem like magic.
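
A hedged sketch of the central trick on simulated data (this is inverse-probability-of-treatment weighting, not Rubin and Waterman’s own code, and it assumes scikit-learn is on hand for the propensity model):

```python
# Propensity scores: model P(treated | covariates), then reweight to mimic a
# randomised experiment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 50_000
x = rng.normal(size=(n, 2))                                  # observed covariates
p_treat = 1 / (1 + np.exp(-(x[:, 0] + 0.5 * x[:, 1])))       # true propensity
t = rng.random(n) < p_treat                                  # confounded treatment
y = 1.0 * t + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)   # true effect = 1.0

naive = y[t].mean() - y[~t].mean()                           # biased by x

e_hat = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
ipw = np.mean(t * y / e_hat) - np.mean((~t) * y / (1 - e_hat))

print(f"naive difference ≈ {naive:.2f}, IPW estimate ≈ {ipw:.2f}")  # IPW ≈ 1.0
```

Matching, stratifying or regressing on the estimated score are the other standard ways to use it; the weighting version is just the shortest to write down.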

Causal Graph inference from data

Uh oh. You don’t know what causes what? Or specifically, you can’t eliminate a whole bunch of potential causal arrows a priori? Much more work.
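
There is still some purchase: under faithfulness, conditional-independence patterns identify part of the structure from observational data alone. A toy sketch (not the PC algorithm proper, though this is its core test): a collider \(X \rightarrow Z \leftarrow Y\) leaves \(X\) and \(Y\) marginally independent but dependent given \(Z\), which is exactly the signature that lets constraint-based methods orient those two edges.

```python
# Colliders are detectable: marginal independence plus conditional dependence.
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x + y + rng.normal(size=n)          # collider

def partial_corr(a, b, c):
    """Correlation of a and b after linearly regressing each on c."""
    resid = lambda v: v - np.polyval(np.polyfit(c, v, 1), c)
    return np.corrcoef(resid(a), resid(b))[0, 1]

print(round(np.corrcoef(x, y)[0, 1], 2))   # ≈ 0: marginally independent
print(round(partial_corr(x, y, z), 2))     # ≈ -0.5: conditioning opens the path
```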

Here is a seminar I noticed on this theme, which is also a lightspeed introduction to some difficulties.

Guido Consonni, Objective Bayes Model Selection of Gaussian Essential Graphs with Observational and Interventional Data.

Graphical models based on Directed Acyclic Graphs (DAGs) represent a powerful tool for investigating dependencies among variables. It is well known that one cannot distinguish between DAGs encoding the same set of conditional independencies (Markov equivalent DAGs) using only observational data. However, the space of all DAGs can be partitioned into Markov equivalence classes, each being represented by a unique Essential Graph (EG), also called Completed Partially Directed Graph (CPDAG). In some fields, in particular genomics, one can have both observational and interventional data, the latter being produced after an exogenous perturbation of some variables in the system, or from randomized intervention experiments. Interventions destroy the original causal structure, and modify the Markov property of the underlying DAG, leading to a finer partition of DAGs into equivalence classes, each one being represented by an Interventional Essential Graph (I-EG) (Hauser and Buehlmann). In this talk we consider Bayesian model selection of EGs under the assumption that the variables are jointly Gaussian. In particular, we adopt an objective Bayes approach, based on the notion of fractional Bayes factor, and obtain a closed form expression for the marginal likelihood of an EG. Next we construct a Markov chain to explore the EG space under a sparsity constraint, and propose an MCMC algorithm to approximate the posterior distribution over the space of EGs. Our methodology, which we name Objective Bayes Essential graph Search (OBES), allows to evaluate the inferential uncertainty associated to any features of interest, for instance the posterior probability of edge inclusion. An extension of OBES to deal simultaneously with observational and interventional data is also presented: this involves suitable modifications of the likelihood and prior, as well as of the MCMC algorithm. We conclude by presenting results for simulated and real experiments (protein-signaling data).

This is joint work with Federico Castelletti, Stefano Peluso and Marco Della Vedova (Università Cattolica del Sacro Cuore).

Causal time series DAGs

As with other time series methods, this has its own issues.

TODO: find out how this works: CausalImpact. (Based on BGKR15.)

The CausalImpact R package implements an approach to estimating the causal effect of a designed intervention on a time series. For example, how many additional daily clicks were generated by an advertising campaign? Answering a question like this can be difficult when a randomized experiment is not available. The package aims to address this difficulty using a structural Bayesian time-series model to estimate how the response metric might have evolved after the intervention if the intervention had not occurred.
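
The bare counterfactual-forecast logic, stripped of the Bayesian structural time-series machinery (a sketch on synthetic data, not the CausalImpact package itself): fit the response against a control series on the pre-intervention period, forecast the post-period counterfactual, and read the effect off the difference.

```python
# Minimal intervention-effect estimate: pre-period fit, post-period counterfactual.
import numpy as np

rng = np.random.default_rng(6)
T, T0 = 200, 150                                      # series length, intervention time
control = 10 + np.cumsum(0.1 * rng.normal(size=T))    # control series, unaffected
response = 2.0 * control + 0.5 * rng.normal(size=T)   # pre-intervention relationship
response[T0:] += 3.0                                  # the "campaign" adds a lift of 3

# Fit response ~ control on the pre-period, then forecast what the post-period
# would have looked like without the intervention.
coef = np.polyfit(control[:T0], response[:T0], 1)
counterfactual = np.polyval(coef, control[T0:])
effect = response[T0:] - counterfactual

print(f"estimated average lift ≈ {effect.mean():.2f}")   # ≈ 3.0
```

The package’s value-add over this caricature is a proper Bayesian structural time-series model for the counterfactual and honest uncertainty intervals for the estimated effect.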

Refs