Abstract
We present a novel approach to approximate sampling in probabilistic programs based on incremental inference. The key idea is to adapt samples for a program P into samples for a program Q, thereby avoiding the expensive cost of sampling Q from scratch. To enable incremental inference in probabilistic programming, our work: (i) introduces the concept of a trace translator, which adapts samples of P into samples of Q, (ii) phrases this translation approach in the context of sequential Monte Carlo (SMC), which gives theoretical guarantees that the adapted samples converge to the distribution induced by Q, and (iii) shows how to obtain a concrete trace translator by establishing a correspondence between the random choices of the two probabilistic programs. We implemented our approach in two different probabilistic programming systems and showed that, compared to methods that sample program Q from scratch, incremental inference can yield an orders-of-magnitude increase in efficiency, depending on how closely related P and Q are.
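To make the idea concrete, here is a minimal sketch (not from the paper; the models, the identity translator, and all names are illustrative assumptions) of reusing samples of a program P as weighted samples of a program Q. Traces are dictionaries of random choices, the trace translator maps a trace of P to a trace of Q, and an importance weight, as in one SMC reweighting step, corrects for the change of model:

```python
import math
import random

random.seed(0)

def log_normal(x, mu, sigma):
    # Log density of Normal(mu, sigma) at x.
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

# Hypothetical toy models: P draws x ~ Normal(0, 1), Q draws x ~ Normal(1, 1).
def log_p(trace):
    return log_normal(trace["x"], 0.0, 1.0)

def log_q(trace):
    return log_normal(trace["x"], 1.0, 1.0)

def translate(trace_p):
    # Identity translator: P and Q share the random choice "x",
    # so a trace of P is already a valid trace of Q.
    return dict(trace_p)

# Reuse samples of P as weighted samples of Q instead of rerunning Q.
particles = []
for _ in range(10000):
    tr_p = {"x": random.gauss(0.0, 1.0)}   # sample a trace of P
    tr_q = translate(tr_p)                 # adapt it into a trace of Q
    log_w = log_q(tr_q) - log_p(tr_p)      # importance weight for the change of model
    particles.append((tr_q, log_w))

# Self-normalized estimate of E_Q[x]; it converges to Q's mean (here 1)
# as the number of particles grows, even though every sample came from P.
max_lw = max(lw for _, lw in particles)
ws = [math.exp(lw - max_lw) for _, lw in particles]
est = sum(w * tr["x"] for (tr, _), w in zip(particles, ws)) / sum(ws)
```

The closer Q is to P, the lower the variance of the weights, which mirrors the paper's observation that the efficiency gain depends on how closely related the two programs are.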
Incremental inference for probabilistic programs. In PLDI 2018: Proceedings of the 39th ACM SIGPLAN Conference on Programming Language Design and Implementation.