Abstract
We propose a new end-to-end question answering model that learns to aggregate answer evidence from an incomplete knowledge base (KB) and a set of retrieved text snippets. Under the assumptions that the structured KB is easier to query and that the acquired knowledge can aid the understanding of unstructured text, our model first accumulates knowledge of entities from a question-related KB subgraph; it then reformulates the question in the latent space and reads the texts with the accumulated entity knowledge at hand. The evidence from the KB and the texts is finally aggregated to predict answers. On the widely used KBQA benchmark WebQSP, our model achieves consistent improvements across settings with different degrees of KB incompleteness.
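To make the final aggregation step concrete, the combination of KB and text evidence could be sketched as a gated mixture of per-candidate scores followed by a softmax. This is a minimal illustration only: the function name, the scalar gate, and the specific combination rule are our assumptions, not the paper's actual formulation.

```python
import numpy as np

def aggregate_evidence(kb_scores, text_scores, gate):
    """Combine per-candidate scores from the KB and the text reader
    with a scalar gate in [0, 1], then normalize with a softmax.
    All names here are illustrative, not the model's actual API."""
    combined = gate * np.asarray(kb_scores) + (1.0 - gate) * np.asarray(text_scores)
    exp = np.exp(combined - combined.max())  # numerically stable softmax
    return exp / exp.sum()

# Toy example: three candidate answer entities.
kb_scores = [2.0, 0.5, -1.0]   # evidence from the KB subgraph
text_scores = [1.0, 2.5, 0.0]  # evidence from retrieved snippets
probs = aggregate_evidence(kb_scores, text_scores, gate=0.6)
print(int(probs.argmax()))  # → 0 (index of the predicted answer)
```

In practice the gate would be predicted per question (e.g., from the question encoding), letting the model lean on text evidence more heavily when the KB is incomplete.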