Inference on graphical models

Given what I know about what I know, what do I know?

Usefulness: 🔧
Novelty: 💡
Uncertainty: 🤪 🤪 🤪
Incompleteness: 🚧 🚧 🚧

(Barber 2012; Lauritzen 1996) are rigorous abstract introductions. (Murphy 2012) has a minimal introduction intermixed with some related models, with a more ML, more Bayesian formalism. For use in causality, (Pearl 2009; Spirtes, Glymour, and Scheines 2001) are readable.

People recommend (Koller and Friedman 2009) to me, which is probably the most detailed and comprehensive, but I found it hard to see the forest for the trees in this one. YMMV.

What’s special here is how we handle independence relations and reasoning about them. In one sense there is nothing special about graphical models; it’s just a graph of which variables are conditionally independent of which others. On the other hand, that graph is a powerful analytic tool, telling you what is confounded with what, and when. Moreover, you can use conditional independence tests to construct that graph even without necessarily constructing the whole model (e.g. (Zhang et al. 2012)).
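A toy sketch of that last point, on synthetic data with made-up coefficients: simulate a chain X → Y → Z and observe that X and Z are correlated marginally but (approximately) independent once we condition on Y, here measured by a partial correlation.

```python
# Simulate a chain X -> Y -> Z (synthetic data, assumed coefficients) and
# check that the X-Z dependence largely vanishes conditional on Y.
import random

random.seed(1)
n = 20000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [0.8 * x + random.gauss(0, 1) for x in X]
Z = [0.8 * y + random.gauss(0, 1) for y in Y]

def corr(a, b):
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n
    va = sum((x - ma) ** 2 for x in a) / n
    vb = sum((y - mb) ** 2 for y in b) / n
    return cov / (va * vb) ** 0.5

r_xz, r_xy, r_yz = corr(X, Z), corr(X, Y), corr(Y, Z)
# Partial correlation of X and Z given Y; near zero iff X ⊥ Z | Y (Gaussian case)
partial = (r_xz - r_xy * r_yz) / ((1 - r_xy ** 2) * (1 - r_yz ** 2)) ** 0.5
print(round(r_xz, 2), round(partial, 2))
```

Real structure-learning algorithms (e.g. the PC algorithm) repeat tests like this over many variable subsets; kernel-based tests such as (Zhang et al. 2012) relax the Gaussianity assumption.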

Directed graphs

Directed graphs of conditional independence are a convenient formalism for many models. These are also called Bayes nets (not to be confused with Bayesian inference).

Once you have the graph, you can infer more detailed relations than mere conditional dependence or independence; this is precisely what hierarchical models emphasise.
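A minimal worked example of the factorisation a directed graph encodes, with made-up numbers: the classic Rain → Sprinkler, Rain → GrassWet ← Sprinkler network, where the joint is p(R) p(S|R) p(G|S,R), and inference is done by brute-force enumeration.

```python
# Joint distribution factorised along a DAG, plus inference by enumeration
# (hypothetical probabilities).
from itertools import product

p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},   # p(S | R=True)
               False: {True: 0.4, False: 0.6}}    # p(S | R=False)
p_grass = {(True, True): 0.99, (True, False): 0.9,
           (False, True): 0.8, (False, False): 0.0}  # p(G=True | S, R)

def joint(r, s, g):
    pg = p_grass[(s, r)]
    return p_rain[r] * p_sprinkler[r][s] * (pg if g else 1 - pg)

# Query: p(Rain=True | GrassWet=True)
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 4))
```

Enumeration is exponential in the number of variables; the point of graphical-model inference algorithms (junction tree, belief propagation, variational methods) is to exploit the factorisation to avoid exactly this blow-up.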

These can even be causal graphical models, and when we can infer those we are extracting Science (ONO) from observational data. This is really interesting; see causal graphical models.

BayesNets is a Julia package for reasoning over directed graphical models.

Undirected, a.k.a. Markov graphs

a.k.a. Markov random fields, Markov networks. (Other types?)

I would like to know about spatial Poisson random fields, Markov random fields, and Bernoulli (or is it Boolean?) random fields, especially for discrete multivariate sequences. Also Gibbs and Boltzmann distribution inference.
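On the Gibbs/Boltzmann point, here is a toy sketch (assumed lattice size and temperature) of Gibbs sampling an Ising-type Markov random field: each spin's conditional distribution depends only on its neighbours, which is exactly the Markov property the undirected graph encodes.

```python
# Gibbs sampling a small periodic Ising lattice (toy parameters).
import math
import random

random.seed(0)
N, beta = 8, 0.6                      # lattice size, inverse temperature
spins = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(N)]

def neighbour_sum(i, j):
    # Sum of the four nearest neighbours, with periodic boundaries
    return sum(spins[(i + di) % N][(j + dj) % N]
               for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)))

for sweep in range(200):
    for i in range(N):
        for j in range(N):
            # Conditional of one spin given its Markov blanket (its neighbours)
            p_up = 1 / (1 + math.exp(-2 * beta * neighbour_sum(i, j)))
            spins[i][j] = 1 if random.random() < p_up else -1

magnetisation = sum(map(sum, spins)) / N ** 2
print(magnetisation)
```

The locality of `p_up` is the whole trick: the Boltzmann distribution over all 2⁶⁴ configurations never needs to be normalised, because each single-site conditional is cheap.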

Wasserman’s explanation of the use case here is good: Estimating Undirected Graphs Under Weak Assumptions

Factor graphs

A unifying formalism for directed and undirected graphical models. How does that work, then?

Wikipedia

A factor graph is a bipartite graph representing the factorization of a function. In probability theory and its applications, factor graphs are used to represent factorization of a probability distribution function, enabling efficient computations, such as the computation of marginal distributions through the sum-product algorithm.

Hmm.
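Concretely, here is a hand-rolled sketch of the sum-product idea on the simplest possible factor graph: a two-variable chain x₁ — f — x₂ with unary factors g₁, g₂ (all potentials are made-up numbers). The message-passing marginal matches brute-force enumeration of the joint.

```python
# Sum-product on a tiny factor graph: g1(x1) f(x1, x2) g2(x2), binary variables.
g1 = [0.3, 0.7]                    # unary factor on x1
g2 = [0.6, 0.4]                    # unary factor on x2
f = [[1.0, 0.2], [0.2, 1.0]]       # pairwise factor f(x1, x2)

# Message from factor f to variable x2: sum over x1 of f(x1, x2) * msg(x1),
# where the incoming message from x1 is just its unary factor g1.
m_f_to_x2 = [sum(f[a][b] * g1[a] for a in range(2)) for b in range(2)]
unnorm = [m_f_to_x2[b] * g2[b] for b in range(2)]
Z = sum(unnorm)
marginal_x2 = [u / Z for u in unnorm]

# Check against brute-force enumeration of the joint
brute = [sum(g1[a] * f[a][b] * g2[b] for a in range(2)) for b in range(2)]
brute = [u / sum(brute) for u in brute]
print(marginal_x2, brute)
```

On trees this recursion computes exact marginals in time linear in the number of factors (Kschischang, Frey, and Loeliger 2001); on loopy graphs the same updates become the approximate "loopy belief propagation" of (Weiss 2000; Yedidia, Freeman, and Weiss 2005).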

Implementations

Pedagogically useful, although probably not industrial-grade: David Barber’s discrete graphical model code (Julia).

Refs

Altun, Yasemin, Alex J. Smola, and Thomas Hofmann. 2004. “Exponential Families for Conditional Random Fields.” In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, 2–9. UAI ’04. Arlington, Virginia, United States: AUAI Press. http://arxiv.org/abs/1207.4131.

Aragam, Bryon, Jiaying Gu, and Qing Zhou. 2017. “Learning Large-Scale Bayesian Networks with the Sparsebn Package,” March. http://arxiv.org/abs/1703.04025.

Aragam, Bryon, and Qing Zhou. 2015. “Concave Penalized Estimation of Sparse Gaussian Bayesian Networks.” Journal of Machine Learning Research 16: 2273–2328. http://jmlr.org/papers/v16/aragam15a.html.

Aral, Sinan, Lev Muchnik, and Arun Sundararajan. 2009. “Distinguishing Influence-Based Contagion from Homophily-Driven Diffusion in Dynamic Networks.” Proceedings of the National Academy of Sciences 106 (51): 21544–9. https://doi.org/10.1073/pnas.0908800106.

Arnold, Barry C., Enrique Castillo, and Jose M. Sarabia. 1999. Conditional Specification of Statistical Models. Springer Science & Business Media. https://books.google.com.au/books?hl=en&lr=&id=lKeKu_HtMdQC&oi=fnd&pg=PA1&dq=arnold+castillo+sarabia+conditional+specification+of+statistical+models&ots=gxWoVEdsde&sig=p0BJlEeB5yQ052m5YhfQ_A6Kmoo.

Baddeley, Adrian J, Jesper Møller, and Rasmus Plenge Waagepetersen. 2000. “Non- and Semi-Parametric Estimation of Interaction in Inhomogeneous Point Patterns.” Statistica Neerlandica 54 (3): 329–50. https://doi.org/10.1111/1467-9574.00144.

Baddeley, Adrian, and Jesper Møller. 1989. “Nearest-Neighbour Markov Point Processes and Random Sets.” International Statistical Review / Revue Internationale de Statistique 57 (2): 89–121. https://doi.org/10.2307/1403381.

Baddeley, A. J., and Marie-Colette NM Van Lieshout. 1995. “Area-Interaction Point Processes.” Annals of the Institute of Statistical Mathematics 47 (4): 601–19. https://doi.org/10.1007/BF01856536.

Baddeley, A. J., Marie-Colette NM Van Lieshout, and J. Møller. 1996. “Markov Properties of Cluster Processes.” Advances in Applied Probability 28 (2): 346–55. https://doi.org/10.2307/1428060.

Barber, David. 2012. Bayesian Reasoning and Machine Learning. Cambridge ; New York: Cambridge University Press. http://www.cs.ucl.ac.uk/staff/d.barber/brml/.

Bareinboim, Elias, Jin Tian, and Judea Pearl. 2014. “Recovering from Selection Bias in Causal and Statistical Inference.” In AAAI, 2410–6. http://ftp.cs.ucla.edu/pub/stat_ser/r425.pdf.

Bartolucci, Francesco, and Julian Besag. 2002. “A Recursive Algorithm for Markov Random Fields.” Biometrika 89 (3): 724–30. https://doi.org/10.1093/biomet/89.3.724.

Besag, Julian. 1974. “Spatial Interaction and the Statistical Analysis of Lattice Systems.” Journal of the Royal Statistical Society. Series B (Methodological) 36 (2): 192–236.

———. 1975. “Statistical Analysis of Non-Lattice Data.” Journal of the Royal Statistical Society. Series D (the Statistician) 24 (3): 179–95. https://doi.org/10.2307/2987782.

———. 1986. “On the Statistical Analysis of Dirty Pictures.” Journal of the Royal Statistical Society. Series B (Methodological) 48 (3): 259–302.

Bishop, Christopher M. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. New York: Springer.

Blake, Andrew, Pushmeet Kohli, and Carsten Rother, eds. 2011. Markov Random Fields for Vision and Image Processing. Cambridge, Mass: MIT Press. https://mitpress.mit.edu/books/markov-random-fields-vision-and-image-processing.

Bloniarz, Adam, Hanzhong Liu, Cun-Hui Zhang, Jasjeet Sekhon, and Bin Yu. 2015. “Lasso Adjustments of Treatment Effect Estimates in Randomized Experiments,” July. http://arxiv.org/abs/1507.03652.

Boyd, Stephen. 2010. “Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers.” Foundations and Trends® in Machine Learning 3 (1): 1–122. https://doi.org/10.1561/2200000016.

Brodersen, Kay H., Fabian Gallusser, Jim Koehler, Nicolas Remy, and Steven L. Scott. 2015. “Inferring Causal Impact Using Bayesian Structural Time-Series Models.” The Annals of Applied Statistics 9 (1): 247–74. https://doi.org/10.1214/14-AOAS788.

Bu, Yunqi, and Johannes Lederer. 2017. “Integrating Additional Knowledge into Estimation of Graphical Models,” April. http://arxiv.org/abs/1704.02739.

Bühlmann, Peter, Markus Kalisch, and Lukas Meier. 2014. “High-Dimensional Statistics with a View Toward Applications in Biology.” Annual Review of Statistics and Its Application 1 (1): 255–78. https://doi.org/10.1146/annurev-statistics-022513-115545.

Bühlmann, Peter, Philipp Rütimann, and Markus Kalisch. 2013. “Controlling False Positive Selections in High-Dimensional Regression and Causal Inference.” Statistical Methods in Medical Research 22 (5): 466–92. http://smm.sagepub.com/content/22/5/466.short.

Celeux, Gilles, Florence Forbes, and Nathalie Peyrard. 2003. “EM Procedures Using Mean Field-Like Approximations for Markov Model-Based Image Segmentation.” Pattern Recognition 36 (1): 131–44. https://doi.org/10.1016/S0031-3203(02)00027-4.

Cevher, Volkan, Marco F. Duarte, Chinmay Hegde, and Richard Baraniuk. 2009. “Sparse Signal Recovery Using Markov Random Fields.” In Advances in Neural Information Processing Systems, 257–64. Curran Associates, Inc. http://papers.nips.cc/paper/3487-sparse-signal-recovery-using-markov-random-fields.

Christakis, Nicholas A., and James H. Fowler. 2007. “The Spread of Obesity in a Large Social Network over 32 Years.” New England Journal of Medicine 357 (4): 370–79. https://doi.org/10.1056/NEJMsa066082.

Clifford, P. 1990. “Markov Random Fields in Statistics.” In Disorder in Physical Systems: A Volume in Honour of John Hammersley, edited by G. R. Grimmett and D. J. A. Welsh. Oxford England : New York: Oxford University Press.

Crisan, Dan, and Joaquín Míguez. 2014. “Particle-Kernel Estimation of the Filter Density in State-Space Models.” Bernoulli 20 (4): 1879–1929. https://doi.org/10.3150/13-BEJ545.

Dawid, A. P. 2001. “Separoids: A Mathematical Framework for Conditional Independence and Irrelevance.” Annals of Mathematics and Artificial Intelligence 32 (1-4): 335–72. https://doi.org/10.1023/A:1016734104787.

Dawid, A. Philip. 1979. “Conditional Independence in Statistical Theory.” Journal of the Royal Statistical Society. Series B (Methodological) 41 (1): 1–31. http://people.csail.mit.edu/tdanford/discovering-causal-graphs-papers/dawid-79.pdf.

———. 1980. “Conditional Independence for Statistical Operations.” The Annals of Statistics 8 (3): 598–617. https://doi.org/10.1214/aos/1176345011.

De Luna, Xavier, Ingeborg Waernbaum, and Thomas S. Richardson. 2011. “Covariate Selection for the Nonparametric Estimation of an Average Treatment Effect.” Biometrika, October, asr041. https://doi.org/10.1093/biomet/asr041.

Edwards, David, and Smitha Ankinakatte. 2015. “Context-Specific Graphical Models for Discrete Longitudinal Data.” Statistical Modelling 15 (4): 301–25. https://doi.org/10.1177/1471082X14551248.

Fixx, James F. 1977. Games for the Superintelligent. London: Muller.

Forbes, F., and N. Peyrard. 2003. “Hidden Markov Random Field Model Selection Criteria Based on Mean Field-Like Approximations.” IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (9): 1089–1101. https://doi.org/10.1109/TPAMI.2003.1227985.

Frey, B.J., and Nebojsa Jojic. 2005. “A Comparison of Algorithms for Inference and Learning in Probabilistic Graphical Models.” IEEE Transactions on Pattern Analysis and Machine Intelligence 27 (9): 1392–1416. https://doi.org/10.1109/TPAMI.2005.169.

Frey, Brendan J. 2003. “Extending Factor Graphs so as to Unify Directed and Undirected Graphical Models.” In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, 257–64. UAI’03. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. http://arxiv.org/abs/1212.2486.

Fridman, Arthur. 2003. “Mixed Markov Models.” Proceedings of the National Academy of Sciences 100 (14): 8092–6. https://doi.org/10.1073/pnas.0731829100.

Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. 2008. “Sparse Inverse Covariance Estimation with the Graphical Lasso.” Biostatistics 9 (3): 432–41. https://doi.org/10.1093/biostatistics/kxm045.

Friel, Nial, and Håvard Rue. 2007. “Recursive Computing and Simulation-Free Inference for General Factorizable Models.” Biometrika 94 (3): 661–72. https://doi.org/10.1093/biomet/asm052.

Geyer, Charles J. 1991. “Markov Chain Monte Carlo Maximum Likelihood.” http://conservancy.umn.edu.sci-hub.org/handle/11299/58440.

Geyer, Charles J., and Jesper Møller. 1994. “Simulation Procedures and Likelihood Inference for Spatial Point Processes.” Scandinavian Journal of Statistics, 359–73.

Goldberg, David A. 2013. “Higher Order Markov Random Fields for Independent Sets,” January. http://arxiv.org/abs/1301.1762.

Grenander, Ulf. 1989. “Advances in Pattern Theory.” The Annals of Statistics 17 (1): 1–30. https://doi.org/10.1214/aos/1176347002.

Griffeath, David. 1976. “Introduction to Random Fields.” In Denumerable Markov Chains, 425–58. Graduate Texts in Mathematics 40. Springer New York. http://link.springer.com/chapter/10.1007/978-1-4684-9455-6_12.

Gu, Jiaying, Fei Fu, and Qing Zhou. 2014. “Adaptive Penalized Estimation of Directed Acyclic Graphs from Categorical Data,” March. http://arxiv.org/abs/1403.2310.

Häggström, Olle, Marie-Colette N. M. van Lieshout, and Jesper Møller. 1999. “Characterization Results and Markov Chain Monte Carlo Algorithms Including Exact Simulation for Some Spatial Point Processes.” Bernoulli 5 (4): 641–58. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.53.1330.

Heckerman, David, David Maxwell Chickering, Christopher Meek, Robert Rounthwaite, and Carl Kadie. 2000. “Dependency Networks for Inference, Collaborative Filtering, and Data Visualization.” Journal of Machine Learning Research 1 (Oct): 49–75. http://www.jmlr.org/papers/v1/heckerman00a.html.

Jensen, Jens Ledet, and Jesper Møller. 1991. “Pseudolikelihood for Exponential Family Models of Spatial Point Processes.” The Annals of Applied Probability 1 (3): 445–61. https://doi.org/10.1214/aoap/1177005877.

Jordan, Michael I. 2004. “Graphical Models.” Statistical Science 19 (1): 140–55.

Jordan, Michael I., Zoubin Ghahramani, Tommi S. Jaakkola, and Lawrence K. Saul. 1999. “An Introduction to Variational Methods for Graphical Models.” Machine Learning 37 (2): 183–233. https://doi.org/10.1023/A:1007665907178.

Jordan, Michael Irwin. 1999. Learning in Graphical Models. Cambridge, Mass.: MIT Press.

Jordan, Michael I., and Yair Weiss. 2002a. “Graphical Models: Probabilistic Inference.” The Handbook of Brain Theory and Neural Networks, 490–96. http://www.cs.iastate.edu/~honavar/jordan2.pdf.

———. 2002b. “Probabilistic Inference in Graphical Models.” Handbook of Neural Networks and Brain Theory. http://mlg.eng.cam.ac.uk/zoubin/course03/hbtnn2e-I.pdf.

Kalisch, Markus, and Peter Bühlmann. 2007. “Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm.” Journal of Machine Learning Research 8 (May): 613–36. http://jmlr.org/papers/v8/kalisch07a.html.

Kindermann, Ross P., and J. Laurie Snell. 1980. “On the Relation Between Markov Random Fields and Social Networks.” The Journal of Mathematical Sociology 7 (1): 1–13. https://doi.org/10.1080/0022250X.1980.9989895.

Kindermann, Ross, and J. Laurie Snell. 1980. Markov Random Fields and Their Applications. Vol. 1. Contemporary Mathematics. Providence, Rhode Island: American Mathematical Society. http://www.ams.org/conm/001/.

Koller, Daphne, and Nir Friedman. 2009. Probabilistic Graphical Models : Principles and Techniques. Cambridge, MA: MIT Press.

Krause, Andreas, and Carlos Guestrin. 2009. “Optimal Value of Information in Graphical Models.” J. Artif. Int. Res. 35 (1): 557–91.

Krämer, Nicole, Juliane Schäfer, and Anne-Laure Boulesteix. 2009. “Regularized Estimation of Large-Scale Gene Association Networks Using Graphical Gaussian Models.” BMC Bioinformatics 10 (1): 384. https://doi.org/10.1186/1471-2105-10-384.

Kschischang, F.R., B.J. Frey, and H.-A. Loeliger. 2001. “Factor Graphs and the Sum-Product Algorithm.” IEEE Transactions on Information Theory 47 (2): 498–519. https://doi.org/10.1109/18.910572.

Lauritzen, S. L., and D. J. Spiegelhalter. 1988. “Local Computations with Probabilities on Graphical Structures and Their Application to Expert Systems.” Journal of the Royal Statistical Society. Series B (Methodological) 50 (2): 157–224. http://intersci.ss.uci.edu/wiki/pdf/Lauritzen1988.pdf.

Lauritzen, Steffen L. 1996. Graphical Models. Clarendon Press.

Lavrenko, Victor, and Jeremy Pickens. 2003a. “Music Modeling with Random Fields.” In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval, 389. ACM Press. https://doi.org/10.1145/860435.860515.

———. 2003b. “Polyphonic Music Modeling with Random Fields.” In Proceedings of the Eleventh ACM International Conference on Multimedia, 120. ACM Press. https://doi.org/10.1145/957013.957041.

Lederer, Johannes. 2016. “Graphical Models for Discrete and Continuous Data,” September. http://arxiv.org/abs/1609.05551.

Liu, Han, Fang Han, Ming Yuan, John Lafferty, and Larry Wasserman. 2012a. “The Nonparanormal SKEPTIC,” June. http://arxiv.org/abs/1206.6488.

———. 2012b. “High-Dimensional Semiparametric Gaussian Copula Graphical Models.” The Annals of Statistics 40 (4): 2293–2326. https://doi.org/10.1214/12-AOS1037.

Liu, Han, Kathryn Roeder, and Larry Wasserman. 2010. “Stability Approach to Regularization Selection (StARS) for High Dimensional Graphical Models.” In Advances in Neural Information Processing Systems 23, edited by J. D. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R. S. Zemel, and A. Culotta, 1432–40. Curran Associates, Inc. http://papers.nips.cc/paper/3966-stability-approach-to-regularization-selection-stars-for-high-dimensional-graphical-models.pdf.

Loeliger, H.-A. 2004. “An Introduction to Factor Graphs.” IEEE Signal Processing Magazine 21 (1): 28–41. https://doi.org/10.1109/MSP.2004.1267047.

Maathuis, Marloes H., and Diego Colombo. 2013. “A Generalized Backdoor Criterion.” arXiv Preprint arXiv:1307.5636. http://arxiv.org/abs/1307.5636.

Maddage, Namunu C., Haizhou Li, and Mohan S. Kankanhalli. 2006. “Music Structure Based Vector Space Retrieval.” In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 67. ACM Press. https://doi.org/10.1145/1148170.1148185.

Malioutov, Dmitry M., Jason K. Johnson, and Alan S. Willsky. 2006. “Walk-Sums and Belief Propagation in Gaussian Graphical Models.” Journal of Machine Learning Research 7 (October): 2031–64. http://jmlr.csail.mit.edu/papers/v7/malioutov06a.html.

Mao, Yongyi, Frank R. Kschischang, and Brendan J. Frey. 2004. “Convolutional Factor Graphs as Probabilistic Models.” In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, 374–81. UAI ’04. Arlington, Virginia, United States: AUAI Press. http://arxiv.org/abs/1207.4136.

Marbach, Daniel, Robert J. Prill, Thomas Schaffter, Claudio Mattiussi, Dario Floreano, and Gustavo Stolovitzky. 2010. “Revealing Strengths and Weaknesses of Methods for Gene Network Inference.” Proceedings of the National Academy of Sciences 107 (14): 6286–91. https://doi.org/10.1073/pnas.0913357107.

McCallum, Andrew. 2012. “Efficiently Inducing Features of Conditional Random Fields,” October. http://arxiv.org/abs/1212.2504.

Meinshausen, Nicolai, and Peter Bühlmann. 2006. “High-Dimensional Graphs and Variable Selection with the Lasso.” The Annals of Statistics 34 (3): 1436–62. https://doi.org/10.1214/009053606000000281.

Mihalkova, Lilyana, and Raymond J. Mooney. 2007. “Bottom-up Learning of Markov Logic Network Structure.” In Proceedings of the 24th International Conference on Machine Learning, 625–32. ACM. http://dl.acm.org/citation.cfm?id=1273575.

Mohan, Karthika, and Judea Pearl. 2018. “Consistent Estimation Given Missing Data.” In International Conference on Probabilistic Graphical Models, 284–95. http://proceedings.mlr.press/v72/mohan18a.html.

Montanari, Andrea. 2011. “Lecture Notes for Stat 375 Inference in Graphical Models.” http://www.stanford.edu/~montanar/TEACHING/Stat375/handouts/notes_stat375_1.pdf.

Morgan, Jonathan Scott, Iman Barjasteh, Cliff Lampe, and Hayder Radha. 2014. “The Entropy of Attention and Popularity in Youtube Videos,” December. http://arxiv.org/abs/1412.1185.

Murphy, Kevin P. 2012. Machine Learning: A Probabilistic Perspective. 1 edition. Cambridge, MA: The MIT Press.

Osokin, A., D. Vetrov, and V. Kolmogorov. 2011. “Submodular Decomposition Framework for Inference in Associative Markov Networks with Global Constraints.” In 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 1889–96. https://doi.org/10.1109/CVPR.2011.5995361.

Pearl, Judea. 1982. “Reverend Bayes on Inference Engines: A Distributed Hierarchical Approach.” In In Proceedings of the National Conference on Artificial Intelligence, 133–36. http://www.aaai.org/Papers/AAAI/1982/AAAI82-032.pdf.

———. 1986. “Fusion, Propagation, and Structuring in Belief Networks.” Artificial Intelligence 29 (3): 241–88. https://doi.org/10.1016/0004-3702(86)90072-X.

———. 2008. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Rev. 2. print., 12. [Dr.]. The Morgan Kaufmann Series in Representation and Reasoning. San Francisco, Calif: Kaufmann.

———. 2009. Causality: Models, Reasoning and Inference. Cambridge University Press.

Pereda, E, R Q Quiroga, and J Bhattacharya. 2005. “Nonlinear Multivariate Analysis of Neurophysiological Signals.” Progress in Neurobiology 77 (1-2): 1–37.

Pickens, Jeremy, and Costas S. Iliopoulos. 2005. “Markov Random Fields and Maximum Entropy Modeling for Music Information Retrieval.” In ISMIR, 207–14. Citeseer. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.60.80&rep=rep1&type=pdf.

Pollard, Dave. 2004. “Hammersley-Clifford Theorem for Markov Random Fields.”

Rabbat, Michael G., MÁrio A. T. Figueiredo, and Robert D. Nowak. 2008. “Network Inference from Co-Occurrences.” IEEE Transactions on Information Theory 54 (9): 4053–68. https://doi.org/10.1109/TIT.2008.926315.

Ranzato, M. 2013. “Modeling Natural Images Using Gated MRFs.” IEEE Transactions on Pattern Analysis and Machine Intelligence 35 (9): 2206–22. https://doi.org/10.1109/TPAMI.2013.29.

Ravikumar, Pradeep, Martin J. Wainwright, and John D. Lafferty. 2010. “High-Dimensional Ising Model Selection Using ℓ1-Regularized Logistic Regression.” The Annals of Statistics 38 (3): 1287–1319. https://doi.org/10.1214/09-AOS691.

Reeves, R., and A. N. Pettitt. 2004. “Efficient Recursions for General Factorisable Models.” Biometrika 91 (3): 751–57. https://doi.org/10.1093/biomet/91.3.751.

Richardson, Matthew, and Pedro Domingos. 2006. “Markov Logic Networks.” Machine Learning 62 (1-2): 107–36. http://link.springer.com/article/10.1007/s10994-006-5833-1.

Ripley, B. D., and F. P. Kelly. 1977. “Markov Point Processes.” Journal of the London Mathematical Society s2-15 (1): 188–92. https://doi.org/10.1112/jlms/s2-15.1.188.

Schmidt, Mark W., and Kevin P. Murphy. 2010. “Convex Structure Learning in Log-Linear Models: Beyond Pairwise Potentials.” In International Conference on Artificial Intelligence and Statistics, 709–16. http://machinelearning.wustl.edu/mlpapers/paper_files/AISTATS2010_SchmidtM10.pdf.

Shachter, Ross D. 1998. “Bayes-Ball: Rational Pastime (for Determining Irrelevance and Requisite Information in Belief Networks and Influence Diagrams).” In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence, 480–87. UAI’98. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc. https://arxiv.org/abs/1301.7412.

Shalizi, Cosma Rohilla, and Edward McFowland III. 2016. “Controlling for Latent Homophily in Social Networks Through Inferring Latent Locations,” July. http://arxiv.org/abs/1607.06565.

Smith, David A., and Jason Eisner. 2008. “Dependency Parsing by Belief Propagation.” In Proceedings of the Conference on Empirical Methods in Natural Language Processing, 145–56. Association for Computational Linguistics. http://dl.acm.org/citation.cfm?id=1613737.

Spirtes, Peter, Clark Glymour, and Richard Scheines. 2001. Causation, Prediction, and Search. Second Edition. Adaptive Computation and Machine Learning. The MIT Press. https://www.cs.cmu.edu/afs/cs.cmu.edu/project/learn-43/lib/photoz/.g/scottd/fullbook.pdf.

Studený, Milan. 1997. “A Recovery Algorithm for Chain Graphs.” International Journal of Approximate Reasoning, Uncertainty in AI (UAI’96) Conference, 17 (2–3): 265–93. https://doi.org/10.1016/S0888-613X(97)00018-2.

———. 2005. Probabilistic Conditional Independence Structures. Information Science and Statistics. London: Springer.

Studený, Milan, and Jiřina Vejnarová. 1998. “On Multiinformation Function as a Tool for Measuring Stochastic Dependence.” In Learning in Graphical Models, 261–97. Cambridge, Mass.: MIT Press.

Su, Ri-Qi, Wen-Xu Wang, and Ying-Cheng Lai. 2012. “Detecting Hidden Nodes in Complex Networks from Time Series.” Phys. Rev. E 85 (6): 065201. https://doi.org/10.1103/PhysRevE.85.065201.

Sutton, Charles, and Andrew McCallum. 2010. “An Introduction to Conditional Random Fields,” November. http://arxiv.org/abs/1011.4088.

Tansey, Wesley, Oscar Hernan Madrid Padilla, Arun Sai Suggala, and Pradeep Ravikumar. 2015. “Vector-Space Markov Random Fields via Exponential Families.” In Journal of Machine Learning Research, 684–92. http://jmlr.csail.mit.edu/proceedings/papers/v37/tansey15.html.

Vetrov, Dmitry, and Anton Osokin. 2011. “Graph Preserving Label Decomposition in Discrete MRFs with Selfish Potentials.” In NIPS Workshop on Discrete Optimization in Machine Learning (DISCML NIPS). http://machinelearning.ru/wiki/images/e/e9/Dissml2011_GPLD.pdf.

Visweswaran, Shyam, and Gregory F. Cooper. 2014. “Counting Markov Blanket Structures,” July. http://arxiv.org/abs/1407.2483.

Wainwright, Martin J., and Michael I. Jordan. 2008. Graphical Models, Exponential Families, and Variational Inference. Vol. 1. Foundations and Trends® in Machine Learning. http://www.cs.berkeley.edu/~jordan/papers/wainwright-jordan-fnt.pdf.

Wainwright, M., and M. Jordan. 2005. “A Variational Principle for Graphical Models.” In New Directions in Statistical Signal Processing. Vol. 155. MIT Press. http://metro-natshar-31-71.brain.net.pk/articles/new-directions-in-statistical-signal-processing-from-systems-to-brains-neural-information-processing.9780262083485.28286.pdf#page=166.

Wang, Chaohui, Nikos Komodakis, and Nikos Paragios. 2013. “Markov Random Field Modeling, Inference & Learning in Computer Vision & Image Understanding: A Survey.” Computer Vision and Image Understanding 117 (11): 1610–27. https://doi.org/10.1016/j.cviu.2013.07.004.

Wasserman, Larry, Mladen Kolar, and Alessandro Rinaldo. 2013. “Estimating Undirected Graphs Under Weak Assumptions,” September. http://arxiv.org/abs/1309.6933.

Weiss, Yair. 2000. “Correctness of Local Probability Propagation in Graphical Models with Loops.” Neural Computation 12 (1): 1–41. https://doi.org/10.1162/089976600300015880.

Weiss, Yair, and William T. Freeman. 2001. “Correctness of Belief Propagation in Gaussian Graphical Models of Arbitrary Topology.” Neural Computation 13 (10): 2173–2200. https://doi.org/10.1162/089976601750541769.

Winn, John M., and Christopher M. Bishop. 2005. “Variational Message Passing.” In Journal of Machine Learning Research, 661–94. http://johnwinn.org/Publications/papers/VMP2005.pdf.

Wright, Sewall. 1934. “The Method of Path Coefficients.” The Annals of Mathematical Statistics 5 (3): 161–215. https://doi.org/10.1214/aoms/1177732676.

Wu, Rui, R. Srikant, and Jian Ni. 2013. “Learning Loosely Connected Markov Random Fields.” Stochastic Systems 3 (2): 362–404. https://doi.org/10.1214/12-SSY073.

Yedidia, Jonathan S., W.T. Freeman, and Y. Weiss. 2005. “Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms.” IEEE Transactions on Information Theory 51 (7): 2282–2312. https://doi.org/10.1109/TIT.2005.850085.

Yedidia, J.S., W.T. Freeman, and Y. Weiss. 2003. “Understanding Belief Propagation and Its Generalizations.” In Exploring Artificial Intelligence in the New Millennium, edited by G. Lakemeyer and B. Nebel, 239–36. Morgan Kaufmann Publishers. http://www.merl.com/publications/TR2001-22.

Zhang, Kun, Jonas Peters, Dominik Janzing, and Bernhard Schölkopf. 2012. “Kernel-Based Conditional Independence Test and Application in Causal Discovery,” February. http://arxiv.org/abs/1202.3775.

Zhou, Mingyuan, Yulai Cong, and Bo Chen. 2017. “Augmentable Gamma Belief Networks,” 44.