Abstract
Reinforcement Learning (RL) allows an agent to discover a policy that achieves a goal. However, interesting RL problems quickly become intractable as the number of features composing the state space grows. The proposed research decomposes a core problem into tasks, each defined over only the features required to solve it. The core agent then uses the reward from each task without knowing the underlying task model. This paper discusses task-based RL and the use of Inverse Reinforcement Learning (IRL) to train the tasks.