research-article
Artifacts Evaluated & Functional

Incremental inference for probabilistic programs

Published: 11 June 2018

Abstract

We present a novel approach for approximate sampling in probabilistic programs based on incremental inference. The key idea is to adapt the samples for a program P into samples for a program Q, thereby avoiding the expensive sampling computation for program Q. To enable incremental inference in probabilistic programming, our work: (i) introduces the concept of a trace translator which adapts samples from P into samples of Q, (ii) phrases this translation approach in the context of sequential Monte Carlo (SMC), which gives theoretical guarantees that the adapted samples converge to the distribution induced by Q, and (iii) shows how to obtain a concrete trace translator by establishing a correspondence between the random choices of the two probabilistic programs. We implemented our approach in two different probabilistic programming systems and showed that, compared to methods that sample the program Q from scratch, incremental inference can lead to an orders-of-magnitude increase in efficiency, depending on how closely related P and Q are.
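The core idea of adapting samples from one program to another can be illustrated with a minimal sketch. The example below is not the paper's implementation: it assumes toy programs P and Q that each make a single random choice `x` (Q merely shifts the prior mean), an identity correspondence between their choices, and a self-normalized importance-weight correction in place of a full SMC step. The names `log_p`, `log_q`, and `translate` are hypothetical.

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    """Log density of a Normal(mu, sigma) at x."""
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Toy "programs": each scores a trace, here a dict with one random choice "x".
def log_p(trace):
    return normal_logpdf(trace["x"], 0.0, 1.0)   # P: x ~ Normal(0, 1)

def log_q(trace):
    return normal_logpdf(trace["x"], 0.5, 1.0)   # Q: x ~ Normal(0.5, 1)

def translate(trace_p):
    """Trace translator: map a trace of P to a trace of Q plus a log weight.

    With an identity correspondence between the random choices, the
    importance weight is simply q(trace) / p(trace).
    """
    trace_q = dict(trace_p)
    log_weight = log_q(trace_q) - log_p(trace_p)
    return trace_q, log_weight

random.seed(0)
# Samples for P, reused instead of sampling Q from scratch.
samples_p = [{"x": random.gauss(0.0, 1.0)} for _ in range(20000)]
translated = [translate(t) for t in samples_p]

# Self-normalized importance estimate of E_Q[x]; should approach 0.5.
weights = [math.exp(lw) for _, lw in translated]
xs = [t["x"] for t, _ in translated]
est = sum(x * w for x, w in zip(xs, weights)) / sum(weights)
print(est)
```

In the paper's setting the translated, weighted samples feed into an SMC step (reweighting, resampling, and rejuvenation), which is what yields the convergence guarantee; the sketch shows only the reweighting that a trace translator induces.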


Supplemental Material

p571-cusumano-towner.webm

