Inference and Learning for Probabilistic Description Logics

2019-11-20
Abstract

The last years have seen an exponential increase in the interest in methods for combining probability with Description Logics (DLs). These methods are very useful for modeling real-world domains, where incompleteness and uncertainty are common, and this combination has become a fundamental component of the Semantic Web. Our work started with the development of a probabilistic semantics for DLs, called DISPONTE ("DIstribution Semantics for Probabilistic ONTologiEs", Spanish for "get ready"). DISPONTE applies the distribution semantics [Sato, 1995] to DLs. The distribution semantics is one of the most effective approaches in logic programming and is exploited by many languages, such as Independent Choice Logic, Probabilistic Horn Abduction, PRISM, pD, Logic Programs with Annotated Disjunctions, CP-logic, and ProbLog.

Under DISPONTE we annotate the axioms of a theory with a probability, which can be interpreted as an epistemic probability, i.e., as the degree of our belief in the corresponding axiom, and we assume that each axiom is independent of the others. DISPONTE, like the distribution semantics, defines a probability distribution over regular knowledge bases (also called worlds). To create a world, we decide whether or not to include each probabilistic axiom; the probability of the world is the product of the probabilities of the choices made. The probability of a query is then obtained from the joint probability of the worlds and the query by marginalization. Consider the Knowledge Base (KB) below:

0.5 :: ∃hasAnimal.Pet ⊑ NatureLover (1)
0.6 :: Cat ⊑ Pet (2)
tom : Cat        (kevin, tom) : hasAnimal
fluffy : Cat     (kevin, fluffy) : hasAnimal

It states that individuals who own an animal which is a pet are nature lovers with a 50% probability, and that cats are pets with a 60% probability.
Moreover, kevin owns the animals fluffy and tom, which are both cats. The KB has four possible worlds, {{(1), (2)}, {(1)}, {(2)}, {}}, and the query axiom Q = kevin : NatureLover is true in the first of them and false in the remaining ones. The probability of the query is P(Q) = 0.5 · 0.6 = 0.3. Several algorithms have been proposed for supporting the development of the Semantic Web. Efficient DL reasoners, such as Pellet, RacerPro, and HermiT, are able to extract implicit information from the modeled ontologies. Despite the availability of many DL reasoners, the number of prob-
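The world-based computation above can be sketched in a few lines of Python: we enumerate the four worlds by including or excluding each probabilistic axiom, weight each world by the product of its choices, and sum the probabilities of the worlds entailing Q. The entailment check is hard-coded to the example (only the world containing both (1) and (2) entails Q, as stated in the text); a real system would call a DL reasoner here.

```python
from itertools import product

# DISPONTE annotations of the example KB:
# (1) ∃hasAnimal.Pet ⊑ NatureLover, with probability 0.5
# (2) Cat ⊑ Pet, with probability 0.6
prob_axioms = {"(1)": 0.5, "(2)": 0.6}

def worlds(axioms):
    """Yield every world together with its probability: each
    probabilistic axiom is independently included or excluded, and
    the world's probability is the product of the choices made."""
    names = sorted(axioms)
    for choices in product([True, False], repeat=len(names)):
        world = {n for n, kept in zip(names, choices) if kept}
        p = 1.0
        for n, kept in zip(names, choices):
            p *= axioms[n] if kept else 1.0 - axioms[n]
        yield world, p

def entails_query(world):
    # Stand-in for a DL reasoner: per the text, Q = kevin : NatureLover
    # holds only when both (1) and (2) are present, since then the cats
    # kevin owns are pets, making kevin a nature lover.
    return {"(1)", "(2)"} <= world

# Marginalize: sum the probabilities of the worlds where Q is true.
p_query = sum(p for w, p in worlds(prob_axioms) if entails_query(w))
print(p_query)  # 0.5 * 0.6 = 0.3
```

Note that the world probabilities sum to 1, so this is a proper distribution over the 2^n regular KBs obtained from n probabilistic axioms.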
