Abstract
Probabilistic logical languages provide powerful formalisms for knowledge representation and learning. Yet performing inference in these languages is extremely costly, especially if it is done at the propositional level. Lifted inference algorithms, which avoid repeated computation by treating indistinguishable groups of objects as one, help mitigate this cost. Seeking inspiration from logical inference, where lifted inference (e.g., resolution) is commonly performed, we develop a model-theoretic approach to probabilistic lifted inference. Our algorithm compiles a first-order probabilistic theory into a first-order deterministic decomposable negation normal form (d-DNNF) circuit. Compilation offers the advantage that inference is polynomial in the size of the circuit. Furthermore, by borrowing techniques from the knowledge compilation literature, our algorithm effectively exploits the logical structure (e.g., context-specific independencies) within the first-order model, which allows more computation to be done at the lifted level. An empirical comparison demonstrates the utility of the proposed approach.
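To make the complexity claim concrete, the sketch below (ours, not the paper's algorithm) evaluates a weighted model count on a small propositional d-DNNF circuit: decomposable AND nodes multiply, deterministic OR nodes add, and one bottom-up pass suffices, so the cost grows with circuit size. The Lit/And/Or classes and the example weights are hypothetical; the paper's compilation targets first-order d-DNNF circuits, which this propositional toy does not capture.

```python
# Minimal sketch (assumed illustration, not the paper's method): weighted
# model counting on a propositional d-DNNF circuit. Node classes and the
# example weights below are hypothetical.

from dataclasses import dataclass
from typing import List, Union


@dataclass
class Lit:
    # Leaf: a literal carrying its weight.
    weight: float


@dataclass
class And:
    # Decomposable AND: children mention disjoint sets of variables,
    # so their counts multiply.
    children: List["Node"]


@dataclass
class Or:
    # Deterministic OR: children are mutually exclusive,
    # so their counts simply add.
    children: List["Node"]


Node = Union[Lit, And, Or]


def wmc(node: Node) -> float:
    """One bottom-up pass over the circuit; each node is visited once
    (a DAG with shared subcircuits would additionally memoize results,
    keeping the cost linear in circuit size)."""
    if isinstance(node, Lit):
        return node.weight
    if isinstance(node, And):
        product = 1.0
        for child in node.children:
            product *= wmc(child)
        return product
    return sum(wmc(child) for child in node.children)


# Example: (a AND b) OR (NOT a AND b) with w(a)=0.3, w(NOT a)=0.7, w(b)=0.6.
# The two OR branches disagree on a, so the circuit is deterministic.
circuit = Or([And([Lit(0.3), Lit(0.6)]),
              And([Lit(0.7), Lit(0.6)])])
print(wmc(circuit))  # 0.6, i.e. 0.3*0.6 + 0.7*0.6
```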