Counterfactuals in Explainable Artificial Intelligence (XAI):
Evidence from Human Reasoning
Abstract
Counterfactuals about what could have happened are increasingly used in an array of Artificial Intelligence (AI) applications, and especially in explainable AI (XAI). Counterfactuals can aid the provision of interpretable models to make the decisions of inscrutable systems intelligible to developers and users. However, not all counterfactuals are equally helpful in assisting human comprehension. Discoveries about the nature of the counterfactuals that humans create are a helpful guide to maximize the effectiveness of counterfactual use in AI.