Abstract
Algorithm selection approaches have achieved impressive performance improvements in many areas of AI. Most of the literature considers the offline algorithm selection problem, where the initial selection model is never updated after training. However, new data from running algorithms on instances becomes available while an algorithm selection method is in use. In this extended abstract, the online algorithm selection problem is considered. In online algorithm selection, additional data can be processed, and the selection model can change over time. This abstract details the online algorithm selection setting, shows that it can be modeled as a contextual multi-armed bandit problem, proposes a solution methodology, and empirically validates it.
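To make the contextual-bandit view concrete, a minimal formalization of one selection round is sketched below; the notation ($x_t$ for instance features, $\mathcal{A}$ for the algorithm portfolio, $\pi_t$ for the selection policy, $c_t$ for the observed cost) is chosen here purely for illustration and is not taken from the full paper.
\[
x_t \ \text{(context: instance features)} \;\longrightarrow\; a_t = \pi_t(x_t) \in \mathcal{A} \ \text{(arm: selected algorithm)} \;\longrightarrow\; c_t(a_t, x_t) \ \text{(bandit feedback: observed cost)},
\]
\[
\pi_{t+1} = \mathrm{Update}(\pi_t,\, x_t,\, a_t,\, c_t), \qquad
R_T = \sum_{t=1}^{T} c_t(a_t, x_t) \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\big(\pi(x_t), x_t\big).
\]
As in any contextual bandit, only the cost of the algorithm actually selected is observed in each round, and the policy is updated from this partial feedback, which is what distinguishes the online setting from offline training on complete performance data.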