Abstract In multitask reinforcement learning, tasks often contain sub-tasks that share the same solution, even though the overall tasks differ. If these shared portions could be effectively identified, the learning process could be improved, since all samples between tasks in the shared space could be used. In this paper, we propose a Sharing Experience Framework (SEF) for training multiple tasks simultaneously. In SEF, a confidence sharing agent uses task-specific rewards from the environment to identify similar parts that should be shared across tasks and defines those parts as shared-regions between tasks. The shared-regions are expected to guide task-policies in sharing their experience during the learning process. Our experiments highlight that the framework improves the performance and stability of learning task-policies and can help task-policies escape local optima.