Abstract
Agents may use ontology alignments to communicate when they represent knowledge with different ontologies: alignments help reclassify objects from one ontology to the other. These alignments may not be perfectly correct, yet agents have
to proceed. They can take advantage of their experience in order to evolve alignments: upon communication failure, they will adapt the alignments
to avoid reproducing the same mistake. Such repair experiments have been performed in the framework of networks of ontologies related by alignments. They revealed that, by playing simple interaction games, agents can effectively repair random networks of ontologies. Here we repeat these
experiments and, using new measures, show that
the previously reported results were underestimated. We introduce new adaptation operators that improve on those
previously considered. We also allow agents to go
beyond the initial operators in two ways: they can
generate new correspondences when they discard
incorrect ones, and they can provide less precise answers. The combination of these modalities satisfies
the following properties: (1) Agents still converge
to a state in which no mistake occurs. (2) They
achieve results far closer to the correct alignments
than previously found. (3) They again reach 100% precision and coherent alignments.
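
To illustrate the repair-upon-failure idea summarised above, the following is a minimal sketch only, not the paper's actual operators, games, or experimental setup: two hypothetical toy ontologies, an initially faulty alignment, and a game in which a correspondence is discarded whenever it causes a communication failure. All names (ONTOLOGY_A, play_round, ...) and the toy data are illustrative assumptions.

    import random

    # Toy ontologies: each maps a class name to the set of objects it covers.
    # The two agents classify the same objects but with different class systems.
    ONTOLOGY_A = {"small": {1, 2}, "large": {3, 4}}
    ONTOLOGY_B = {"light": {1, 2, 3}, "heavy": {4}}

    # Initial (partly incorrect) alignment: correspondences from A's classes to B's.
    alignment = {("small", "light"), ("large", "light"), ("large", "heavy")}

    def classify(ontology, obj):
        """Return the classes of `ontology` that contain `obj`."""
        return {cls for cls, members in ontology.items() if obj in members}

    def play_round(alignment):
        """One interaction game: A classifies a random object, translates the class
        through the alignment, and B checks the translation against its own
        classification. Failing correspondences are discarded (a basic repair step)."""
        obj = random.choice([1, 2, 3, 4])
        failures = set()
        for cls_a in classify(ONTOLOGY_A, obj):
            for (src, tgt) in alignment:
                if src == cls_a and tgt not in classify(ONTOLOGY_B, obj):
                    failures.add((src, tgt))  # B disagrees: mistaken correspondence
        return alignment - failures, bool(failures)

    if __name__ == "__main__":
        random.seed(0)
        for step in range(50):
            alignment, failed = play_round(alignment)
            if failed:
                print(f"step {step}: repaired alignment -> {sorted(alignment)}")
        print("final alignment:", sorted(alignment))

Under these toy assumptions the agents converge to an alignment that no longer produces mistakes (here, only the correct correspondence survives), which is the kind of convergence property the abstract refers to.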