**Abstract**
Many algorithms for performing inference in graphical models have complexity that is exponential in
the treewidth, a parameter of the underlying graph
structure. Computing the (minimal) treewidth is NP-complete, so stochastic algorithms are sometimes used
to find low-width tree decompositions. A common approach to
finding good decompositions is to iteratively
execute a greedy triangulation algorithm (e.g., min-fill)
with randomized tie-breaking. However, utilizing
a stochastic algorithm as part of the inference task introduces
a new problem: deciding how long
the stochastic algorithm should be allowed to execute
before performing inference on the best tree decomposition found
so far. We refer to this dilemma as the Stopping Problem and
formalize it in terms of the total time
needed to answer a probabilistic query. We propose a
rule for discontinuing the search for improved decompositions
and demonstrate the benefit, in terms of time
saved, of applying this rule to Bayesian and Markov network instances.
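As a concrete illustration of the randomized search described above (a minimal sketch, not the paper's implementation; the function names `min_fill_order` and `best_decomposition` are hypothetical), each trial runs greedy min-fill with uniform random tie-breaking, and the induced width of the best elimination order found upper-bounds the treewidth:

```python
import random

def min_fill_order(adj, rng):
    """One randomized min-fill pass over an undirected graph given as
    {vertex: set-of-neighbors}. Repeatedly eliminates a vertex whose
    elimination adds the fewest fill edges, breaking ties uniformly at
    random. Returns (elimination order, induced width of that order)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    order, width = [], 0
    while adj:
        best, best_fill = [], None
        for v in adj:
            nbrs = list(adj[v])
            # Fill edges: pairs of v's neighbors not already adjacent.
            fill = sum(1 for i in range(len(nbrs))
                         for j in range(i + 1, len(nbrs))
                         if nbrs[j] not in adj[nbrs[i]])
            if best_fill is None or fill < best_fill:
                best, best_fill = [v], fill
            elif fill == best_fill:
                best.append(v)
        v = rng.choice(best)              # randomized tie-breaking
        nbrs = adj.pop(v)
        width = max(width, len(nbrs))     # clique size minus one
        for a in nbrs:                    # connect v's neighbors into a clique
            adj[a].discard(v)
            for b in nbrs:
                if b != a:
                    adj[a].add(b)
        order.append(v)
    return order, width

def best_decomposition(adj, trials=20, seed=0):
    """Iterate randomized min-fill, keeping the lowest-width order found."""
    rng = random.Random(seed)
    return min((min_fill_order(adj, rng) for _ in range(trials)),
               key=lambda ow: ow[1])
```

The Stopping Problem is then the choice of `trials` (or, more generally, of a stopping rule): each additional pass may lower the width, and hence the exponential cost of inference, but also delays the moment inference can begin.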