Abstract
Statistical relational models such as Markov logic networks (MLNs) and hinge-loss Markov random fields (HL-MRFs) are specified using templated, weighted first-order logic clauses, yielding complex yet easy-to-encode models that effectively combine uncertainty and logic. Learning the structure of these models from data reduces the human effort of identifying the right structures. In this work, we present an asynchronous deep reinforcement learning algorithm to automatically learn HL-MRF clause structures. Our algorithm learns semantically meaningful structures that appeal to human intuition and understanding while simultaneously learning structures from data, thus producing structures that have both the desirable qualities of interpretability and good prediction performance. The asynchronous nature of our algorithm further provides the ability to learn diverse structures via exploration while remaining scalable. We demonstrate that our models learn semantically meaningful structures and achieve better prediction performance than a greedy search algorithm, a path-based algorithm, and manually defined clauses on two computational social science applications: i) modeling recovery in alcohol use disorder, and ii) detecting bullying.