
Publications & Technical Reports

Anytime Approximate Inference in Graphical Models
Qi Lou

Graphical models are a powerful framework for modeling interactions within complex systems. Reasoning over graphical models typically involves answering inference queries, such as computing the most likely configuration (maximum a posteriori, or MAP) or evaluating the marginals or normalizing constant of a distribution (the partition function); a task called marginal MAP generalizes these two by maximizing over a subset of variables while marginalizing over the rest. Exact computation of these queries is intractable in general, which has led to the development of many approximate schemes, the major categories being variational methods, search algorithms, and Monte Carlo sampling. Among these, anytime techniques, which provide guarantees on the correct value and can be improved with more computational effort, are valued for quickly giving users confidence intervals or certificates of accuracy and for letting them choose the desired balance of quality, time, and memory.

In this dissertation, we develop a series of approximate inference algorithms for the partition function and marginal MAP with anytime properties, leveraging ideas and techniques from the three inference paradigms and integrating them into hybrid solutions that inherit the strengths of all three approaches. We propose anytime anyspace best-first search algorithms that provide deterministic bounds on the partition function and marginal MAP; these schemes take advantage of both AND/OR tree search and optimized variational heuristics. We then extend this approach to give anytime probabilistic confidence bounds via a dynamic importance sampling algorithm, which interleaves importance sampling (using proposal distributions extracted from the variational bound) with our best-first search algorithm to refine the proposal.
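To make the three queries above concrete, the following is a minimal sketch on a hypothetical toy chain model over three binary variables (the log-potential values are invented for illustration). It computes each query by exhaustive enumeration, which is exponential in general; that intractability is exactly what motivates the approximate schemes discussed here.

```python
import itertools
import math

# Toy chain model A - B - C over binary variables, with hypothetical
# log-potentials chosen for illustration only.
theta = {("A", "B"): [[0.0, 1.0], [1.0, 0.5]],
         ("B", "C"): [[0.5, 0.0], [0.0, 1.0]]}

def log_f(a, b, c):
    """Unnormalized log-weight of a full assignment."""
    return theta[("A", "B")][a][b] + theta[("B", "C")][b][c]

# Partition function: sum of unnormalized weights over all configurations.
Z = sum(math.exp(log_f(a, b, c))
        for a, b, c in itertools.product([0, 1], repeat=3))

# MAP: the single most likely full configuration.
map_config = max(itertools.product([0, 1], repeat=3),
                 key=lambda x: log_f(*x))

# Marginal MAP: maximize over A while summing out B and C.
mmap_a = max([0, 1], key=lambda a: sum(math.exp(log_f(a, b, c))
                                       for b in [0, 1] for c in [0, 1]))
```

In this particular toy model the MAP configuration sets A = 0, while the marginal MAP choice is A = 1, illustrating that maximizing jointly and maximizing after summation can disagree.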
We also propose a framework for interleaving sampling with the optimization of the initial variational bound, which can automatically balance its computational effort between the two schemes. Overall, we show that our hybrid algorithms perform significantly better than existing methods, giving flexible approaches with excellent anytime confidence bounds.
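The basic ingredient behind such probabilistic confidence bounds can be sketched as follows: unnormalized importance sampling gives an unbiased estimate of the partition function Z, and if the importance weights are bounded above (for instance by a variational upper bound), a concentration inequality yields a finite-sample interval. This is a simplified illustration using Hoeffding's inequality, not the dissertation's exact construction; the bound U and the proposal are assumed inputs.

```python
import math
import random

def importance_sample_Z(weight_fn, draw, n, U, delta=0.05):
    """Unbiased importance-sampling estimate of a partition function Z,
    with a Hoeffding confidence interval.

    weight_fn(x) returns f(x)/q(x), the unnormalized model weight over
    the proposal density; U is an assumed upper bound on these weights
    (hypothetically supplied by a variational upper bound on Z).
    """
    ws = [weight_fn(draw()) for _ in range(n)]
    z_hat = sum(ws) / n
    # Weights lie in [0, U]; Hoeffding's inequality gives a two-sided
    # interval that holds with probability at least 1 - delta.
    eps = U * math.sqrt(math.log(2.0 / delta) / (2.0 * n))
    return z_hat, max(z_hat - eps, 0.0), z_hat + eps

# Toy model: one binary variable with f(0) = 1, f(1) = 2, so Z = 3.
# Proposal q is uniform, hence weights f(x)/0.5 are 2 or 4, and U = 4.
random.seed(0)
f = [1.0, 2.0]
z_hat, lo, hi = importance_sample_Z(
    lambda x: f[x] / 0.5, lambda: random.randint(0, 1), n=10000, U=4.0)
# [lo, hi] should contain Z = 3 with high probability.
```

A tighter proposal (one closer to the normalized model) shrinks the weight range and hence the interval width, which is the intuition behind using search to refine the proposal over time.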