Abstract
Explainable planning is widely accepted as a prerequisite for autonomous agents to successfully
work with humans. While there has been a lot of
research on generating explanations of solutions to
planning problems, explaining the absence of solutions remains a largely open and under-studied
problem, even though such situations can be the
hardest to understand or debug. In this paper, we
show that hierarchical abstractions can be used
to efficiently generate reasons for unsolvability of
planning problems. In contrast to related work on
computing certificates of unsolvability, we show
that our methods can generate compact, human-understandable reasons for unsolvability. Empirical
analysis and user studies show the validity of our
methods as well as their computational efficacy on
a number of benchmark planning domains.