Can Meta-Interpretive Learning Outperform Deep Reinforcement Learning of
Evaluable Game Strategies?
World-class human players have been outperformed in a number of complex two-person games, such as Go, by Deep Reinforcement Learning systems [Silver et al., 2016]. However, these systems have several drawbacks: 1) Their data efficiency is unclear, since they appear to require far more training games to achieve such performance than any human player could experience in a lifetime. 2) They are not easily interpretable, as they provide limited explanation of how decisions are made. 3) They do not support transfer of the learned strategies to other games. In this work we study how an explicit logical representation can overcome these limitations.