Abstract
Learning rational behaviors in first-person shooter (FPS) games is a challenging task for Reinforcement Learning (RL), the primary difficulties being a huge action space and insufficient exploration. To address this, we propose a hierarchical agent based on combined options with intrinsic rewards to drive exploration. Specifically, we present a hierarchical model that works in a manager-worker fashion over two levels of hierarchy: the high-level manager learns a policy over options, and the low-level workers, motivated by intrinsic rewards, learn to execute those options. Performance is further improved by appropriately harnessing environmental signals. Extensive experiments demonstrate that our trained bot significantly outperforms alternative RL-based models on FPS games requiring maze solving, combat skills, etc. Notably, we achieved first place in VDAIC 2018 Track (1).
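The paper's exact architecture is not given in the abstract; as a rough illustration only, the two-level manager-worker control loop with an intrinsic novelty reward described above might be sketched as follows (all class names, option labels, and the toy environment are hypothetical, not the authors' implementation):

```python
import random

class Manager:
    """High-level policy: selects an option (a temporally extended sub-goal).
    The option set here is purely illustrative."""
    OPTIONS = ["explore_maze", "engage_enemy", "collect_item"]

    def select_option(self, state):
        # Placeholder for a learned policy over options (e.g. epsilon-greedy Q-values).
        return random.choice(self.OPTIONS)

class Worker:
    """Low-level policy: executes primitive actions for a given option and is
    trained on an intrinsic reward in addition to the environmental reward."""

    def intrinsic_reward(self, state, next_pos, option):
        # Example intrinsic signal: reward visiting previously unseen states,
        # which drives exploration when extrinsic rewards are sparse.
        return 1.0 if next_pos not in state["visited"] else 0.0

    def act(self, state, option):
        # Placeholder for the option-conditioned primitive action policy.
        return "move_forward"

def run_episode(steps=5):
    """Toy rollout: manager picks options, worker acts, intrinsic reward accrues."""
    manager, worker = Manager(), Worker()
    state = {"visited": set()}
    total_intrinsic = 0.0
    for t in range(steps):
        option = manager.select_option(state)   # high level: choose an option
        action = worker.act(state, option)      # low level: primitive action
        next_pos = t + 1                        # toy deterministic transition
        total_intrinsic += worker.intrinsic_reward(state, next_pos, option)
        state["visited"].add(next_pos)
    return total_intrinsic
```

Because every state in this toy rollout is new, each step earns the novelty bonus; in a real FPS environment the same structure would let the manager switch options as the intrinsic and extrinsic signals dictate.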