Diversity-Inducing Policy Gradient:
Using Maximum Mean Discrepancy to Find a Set of Diverse Policies
Abstract
Standard reinforcement learning methods aim to master one way of solving a task, whereas there may exist multiple near-optimal policies. Being able to identify this collection of near-optimal policies can allow a domain expert to efficiently explore the space of reasonable solutions. Unfortunately, existing approaches that quantify uncertainty over policies do not directly address finding policies with qualitatively distinct behaviors. In this work, we formalize the difference between policies as a difference between the distributions of trajectories induced by each policy, which encourages diversity with respect to both state visitation and action choices. We derive a gradient-based optimization technique that can be combined with existing policy gradient methods to identify diverse collections of well-performing policies. We demonstrate our approach on benchmarks and a healthcare task.
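
The central quantity named in the title is the maximum mean discrepancy (MMD) between the trajectory distributions induced by two policies. As a rough illustration only, not the paper's implementation, the following NumPy sketch computes the standard biased MMD^2 estimator with an RBF kernel; the function names, the flattened (state, action) trajectory featurization, and the bandwidth sigma are illustrative assumptions.

import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd_squared(X, Y, sigma=1.0):
    # Biased estimator of squared MMD between samples X ~ P and Y ~ Q:
    # mean k(x, x') + mean k(y, y') - 2 * mean k(x, y).
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2.0 * rbf_kernel(X, Y, sigma).mean())

# Hypothetical usage: each row is a flattened trajectory feature vector,
# e.g., concatenated states and actions, sampled under each policy.
traj_a = np.random.randn(50, 20)        # 50 trajectories from policy A
traj_b = np.random.randn(50, 20) + 0.5  # 50 trajectories from policy B
print(mmd_squared(traj_a, traj_b, sigma=1.0))

A larger MMD^2 between the two trajectory samples indicates more behaviorally distinct policies; in the paper's setting this discrepancy serves as a diversity signal alongside the usual policy gradient objective.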