Abstract
Machine learning algorithms aim to minimize the number of false decisions and increase the accuracy of predictions. However, the high predictive power of advanced algorithms comes at the cost of transparency. State-of-the-art methods, such as neural networks and ensemble methods, result in highly complex models that offer little transparency.
We propose shallow model trees as a way to combine simple, highly transparent predictive models to achieve higher predictive power without losing the transparency of the original models. We present a novel split criterion for model trees that allows for significantly higher predictive power than state-of-the-art model trees while maintaining the same level of simplicity. The approach finds split points that allow the underlying simple models to make better predictions on the corresponding data.
In addition, we introduce multiple mechanisms to increase the transparency of the resulting trees.
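To make the general idea concrete, the sketch below illustrates one way a model tree can choose a split for the benefit of its leaf models: each candidate threshold is scored by fitting a simple least-squares linear model on either side and summing the resulting errors. This is a minimal illustration under assumed design choices (linear leaf models, squared error, the hypothetical helpers leaf_sse and best_split), not the exact criterion proposed in the paper.

    # Minimal sketch: score a candidate split by the combined fit error of
    # simple linear models on the two resulting partitions, so the split is
    # chosen to help the leaf models rather than by impurity alone.
    # NOTE: illustrative only; not the paper's actual split criterion.
    import numpy as np

    def leaf_sse(X, y):
        """Sum of squared errors of a least-squares linear fit with intercept."""
        A = np.hstack([X, np.ones((len(X), 1))])       # add intercept column
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        residuals = y - A @ coef
        return float(residuals @ residuals)

    def best_split(X, y, feature, min_leaf=5):
        """Return (threshold, score) minimizing the summed SSE of the two leaf models."""
        order = np.argsort(X[:, feature])
        Xs, ys = X[order], y[order]
        best_thr, best_score = None, np.inf
        for i in range(min_leaf, len(ys) - min_leaf):
            thr = (Xs[i - 1, feature] + Xs[i, feature]) / 2.0
            score = leaf_sse(Xs[:i], ys[:i]) + leaf_sse(Xs[i:], ys[i:])
            if score < best_score:
                best_thr, best_score = thr, score
        return best_thr, best_score

    # Toy usage: a piecewise-linear target; the chosen threshold lands near x = 0.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.where(X[:, 0] < 0, 2 * X[:, 0], -3 * X[:, 0]) + rng.normal(0, 0.05, 200)
    print(best_split(X, y, feature=0))

In a shallow tree, a handful of such splits combined with simple leaf models can capture structure that a single simple model cannot, while each leaf model remains individually interpretable.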