Abstract
Lifted inference algorithms for first-order logic models, e.g., Markov logic networks (MLNs), have been of significant interest in recent years. Lifted inference methods exploit model symmetries to reduce the size of the model and, consequently, the computational cost of inference. In this work, we consider the problem of lifted inference in MLNs with continuous or mixed discrete and continuous groundings. Existing work on lifting with continuous groundings has mostly been limited to special classes of models, e.g., Gaussian models, for which variable elimination or message-passing updates can be computed exactly. Here, we develop approximate lifted inference schemes based on particle sampling. We demonstrate empirically that our approximate lifting schemes perform comparably to existing state-of-the-art methods for Gaussian MLNs, while having the flexibility to be applied to models with arbitrary potential functions.