Improving Model Counting by Leveraging Definability

2019-11-25
Abstract: We present a new preprocessing technique for propositional model counting. This technique leverages definability, i.e., the ability to determine that some gates are implied by the input formula Σ. Such gates can be exploited to simplify Σ without modifying its number of models. Unlike previous techniques based on gate detection and replacement, gates do not need to be made explicit in our approach. Our preprocessing technique thus consists of two phases: computing a bipartition ⟨I, O⟩ of the variables of Σ where the variables from O are defined in Σ in terms of I, then eliminating some variables of O in Σ. Our experiments show the computational benefits which can be achieved by taking advantage of our preprocessing technique for model counting.
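The key observation can be illustrated with a minimal, self-contained Python sketch (this toy example is not the paper's algorithm; the formula and variable names are invented for illustration). Here `o` is a gate of the input formula, since `o <-> (a and b)` is implied, so `o` is defined in terms of I = {a, b}. Forgetting such a defined variable preserves the model count, because every model over I extends to exactly one model over all variables:

```python
from itertools import product

# Toy formula over variables a, b, o in which o is a defined variable:
# the formula implies o <-> (a and b), together with the constraint (a or b).
def formula(a, b, o):
    return (o == (a and b)) and (a or b)

# Model count over all three variables.
full_count = sum(
    formula(a, b, o) for a, b, o in product([False, True], repeat=3)
)

# Eliminating (existentially forgetting) the defined variable o preserves
# the count: each model over {a, b} extends to exactly one value of o.
proj_count = sum(
    any(formula(a, b, o) for o in (False, True))
    for a, b in product([False, True], repeat=2)
)

print(full_count, proj_count)  # both equal 3
```

This is why, in the bipartition ⟨I, O⟩, variables of O can be eliminated without changing the number of models, even though the gates are never made explicit.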

