Twin-Systems to Explain Artificial Neural Networks using Case-Based Reasoning:
Comparative Tests of Feature-Weighting Methods in ANN-CBR Twins for XAI
Abstract
In this paper, twin-systems are described to address the eXplainable artificial intelligence (XAI) problem, where a black-box model is mapped to a white-box “twin” that is more interpretable, with both systems using the same dataset. The framework is instantiated by twinning an artificial neural network (ANN; black box) with a case-based reasoning system (CBR; white box), and mapping the feature weights from the former to the latter to find cases that explain the ANN’s outputs. Using a novel evaluation method, the effectiveness of this twin-system approach is demonstrated by showing that nearest-neighbor cases can be found to match the ANN predictions for benchmark datasets. Several feature-weighting methods are competitively tested in two experiments, including our novel contributions-based method (called COLE) that is found to perform best. The tests consider the “twinning” of traditional multilayer perceptron (MLP) networks and convolutional neural networks (CNNs) with CBR systems. For the CNNs trained on image data, qualitative evidence shows that cases provide plausible explanations for the CNN’s classifications.
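The core twinning idea described above can be sketched in a few lines. The snippet below is an illustrative assumption, not the paper's actual COLE implementation: it stands in a fixed linear weight vector `w` for a trained ANN, derives per-feature weights from the magnitudes of each feature's contribution (`|w_i * x_i|`) to the query's output, and uses them in a weighted nearest-neighbor retrieval over the training cases. All names (`explain_by_cases`, `X_train`, `w`) are hypothetical.

```python
import numpy as np

# Hypothetical stand-ins for a trained model and its training data;
# a real twin-system would extract weights from the actual ANN.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 4))      # case base (training inputs)
w = np.array([2.0, -1.0, 0.5, 0.0])      # stand-in for learned ANN weights
y_train = (X_train @ w > 0).astype(int)  # the model's own labels for the cases

def explain_by_cases(x_query, k=3):
    """Retrieve the k training cases nearest to x_query under
    contribution-based feature weights (|w_i * x_i|, normalised)."""
    contrib = np.abs(w * x_query)
    total = contrib.sum()
    fw = contrib / total if total > 0 else np.full_like(contrib, 1 / len(contrib))
    # weighted Euclidean distance to every case in the case base
    d = np.sqrt((((X_train - x_query) ** 2) * fw).sum(axis=1))
    idx = np.argsort(d)[:k]
    return idx, y_train[idx]

idx, labels = explain_by_cases(rng.normal(size=4))
print(idx, labels)
```

The retrieved cases (and their model-assigned labels) then serve as example-based explanations: "the model predicted this class because the input resembles these known cases."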