Premise Selection for Theorem Proving by Deep Graph Embedding

Abstract 

We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
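To make the abstract's two key ideas concrete, a graph representation that is invariant to variable renaming and edges that retain argument ordering, here is a minimal, hypothetical sketch. The `Term` type, the `VAR` label, and the binder names are assumptions for illustration only; this is not the paper's implementation. Bound variable names are discarded and every occurrence points back to a single anonymous node created by its binder, so alpha-equivalent formulas map to the same graph, while child ranks on the edges preserve syntactic ordering.

```python
# Hypothetical toy construction (not the authors' released code): build a
# rename-invariant graph from a simple higher-order-logic term.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Term:
    head: str                       # constant, quantifier, or variable name
    args: List["Term"] = field(default_factory=list)

def term_to_graph(t: Term) -> Tuple[List[str], List[Tuple[int, int, int]]]:
    """Return (node labels, edges); each edge is (parent, child_rank, child)."""
    nodes: List[str] = []
    edges: List[Tuple[int, int, int]] = []
    _build(t, {}, nodes, edges)
    return nodes, edges

def _build(t: Term, binders: Dict[str, int],
           nodes: List[str], edges: List[Tuple[int, int, int]]) -> int:
    if t.head in binders:           # bound variable: reuse its binder's node
        return binders[t.head]
    node_id = len(nodes)
    if t.head in ("forall", "exists", "lambda"):
        nodes.append(t.head)        # label the binder, not the variable name
        var_name, body = t.args[0].head, t.args[1]
        var_node = len(nodes)
        nodes.append("VAR")         # anonymous node -> invariance to renaming
        edges.append((node_id, 0, var_node))
        body_node = _build(body, {**binders, var_name: var_node}, nodes, edges)
        edges.append((node_id, 1, body_node))
    else:
        nodes.append(t.head)
        for rank, arg in enumerate(t.args):   # child ranks keep edge ordering
            edges.append((node_id, rank, _build(arg, binders, nodes, edges)))
    return node_id

# Alpha-equivalent formulas yield identical (nodes, edges) pairs:
f1 = Term("forall", [Term("x"), Term("P", [Term("x")])])
f2 = Term("forall", [Term("y"), Term("P", [Term("y")])])
assert term_to_graph(f1) == term_to_graph(f2)
```

An order-preserving graph embedding, as described in the abstract, would then aggregate neighbor messages using these child ranks rather than an unordered sum, before pooling node vectors into a single formula embedding for the premise-selection classifier.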

