Abstract
Reinforcement Learning agents are expected to eventually perform well. Typically, this expectation takes the form of a guarantee about the asymptotic behavior of an algorithm, given some assumptions about
the environment. We present an algorithm for a
policy whose value approaches the optimal value
with probability 1 in all computable probabilistic
environments, provided the agent has a bounded
horizon. This property is known as strong asymptotic optimality, and it was previously unknown whether any policy could be strongly asymptotically optimal in the class of all computable probabilistic environments. Our agent, the Inquisitive Reinforcement Learner (Inq), is more likely to explore the more it expects an exploratory action to reduce its uncertainty about which environment it is in, hence the term “inquisitive.” Exploring inquisitively is a strategy that can be applied generally;
for more manageable environment classes, inquisitiveness is tractable. We conducted experiments
in “grid-worlds” to compare the Inquisitive Reinforcement Learner to other weakly asymptotically
optimal agents.
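
The exploration rule described above, exploring with probability that grows with the expected reduction in uncertainty about the true environment, can be made concrete in a small setting. The following is a minimal sketch, assuming a toy finite class of two-armed Bernoulli-bandit environments; the candidate list ENVS, the 5.0 scaling of the exploration probability, and the helper names updated_posterior and expected_info_gain are all illustrative assumptions, not the paper's construction, which is defined over the class of all computable probabilistic environments.

import math
import random

# Hypothetical finite class of two-armed Bernoulli bandits: each candidate
# environment assigns a success probability to each arm. These numbers are
# illustrative only.
ENVS = [(0.2, 0.8), (0.8, 0.2), (0.5, 0.5)]
posterior = [1.0 / len(ENVS)] * len(ENVS)  # uniform prior over the class

def entropy(p):
    return -sum(q * math.log(q) for q in p if q > 0.0)

def updated_posterior(post, arm, outcome):
    # Bayes update after observing outcome (0 or 1) from pulling arm.
    likes = [env[arm] if outcome else 1.0 - env[arm] for env in ENVS]
    unnorm = [p * l for p, l in zip(post, likes)]
    z = sum(unnorm)
    return [u / z for u in unnorm] if z > 0.0 else post

def expected_info_gain(post, arm):
    # Expected reduction in posterior entropy from pulling arm: the mutual
    # information between the next outcome and the environment's identity.
    h0 = entropy(post)
    gain = 0.0
    for outcome in (0, 1):
        pred = sum(p * (env[arm] if outcome else 1.0 - env[arm])
                   for p, env in zip(post, ENVS))
        if pred > 0.0:
            gain += pred * (h0 - entropy(updated_posterior(post, arm, outcome)))
    return gain

true_env = ENVS[0]  # pretend the first candidate is the real environment
for _ in range(200):
    gains = [expected_info_gain(posterior, a) for a in (0, 1)]
    # Explore with probability growing in the expected information gain
    # (the 5.0 scale is an arbitrary illustrative choice); otherwise
    # exploit the arm with the best posterior-mean reward.
    if random.random() < min(1.0, 5.0 * max(gains)):
        arm = max((0, 1), key=lambda a: gains[a])
    else:
        arm = max((0, 1), key=lambda a: sum(
            p * env[a] for p, env in zip(posterior, ENVS)))
    outcome = 1 if random.random() < true_env[arm] else 0
    posterior = updated_posterior(posterior, arm, outcome)

print("final posterior over candidate environments:", posterior)

Here the expected information gain is the mutual information between the next observation and the environment's identity under the current posterior; as the abstract notes, this kind of computation is tractable only for manageable environment classes such as this one.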