We propose and study the known-compensation multi-armed bandit (KCMAB) problem, in which a system controller offers a set of arms to many short-term players over $T$ steps. In each step, one short-term player arrives at the system. Upon arrival, the player aims to select the arm with the current best average reward and receives a stochastic reward associated with that arm. To incentivize players to explore other arms, the controller provides proper payment compensations to players. The objective of the controller is to maximize the total reward collected by players while minimizing the total compensation. We first provide a compensation lower bound $\Theta(\sum_i \frac{\Delta_i \log T}{KL_i})$, where $\Delta_i$ and $KL_i$ are the expected reward gap and the Kullback-Leibler (KL) divergence between the distributions of arm $i$ and the best arm, respectively. We then analyze three algorithms for solving the KCMAB problem and obtain their regrets and compensations. We show that the algorithms all achieve $O(\log T)$ regret and $O(\log T)$ compensation, matching the theoretical lower bound. Finally, we present experimental results to demonstrate the performance of the algorithms.
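To make the setting concrete, the following is a minimal toy simulation of the KCMAB interaction, not any of the paper's three algorithms: the controller runs standard UCB over Bernoulli arms, and whenever it asks a myopic short-term player to pull an arm other than the empirically best one, it pays that player the empirical mean gap as compensation. The function name, the compensation rule, and all parameters are illustrative assumptions.

```python
import math
import random


def simulate_kcmab(mu=(0.7, 0.5), T=2000, seed=0):
    """Toy KCMAB simulation (illustrative sketch, not the paper's algorithms).

    A controller runs UCB over Bernoulli arms with true means `mu`.
    Each arriving myopic player would pull the arm with the highest
    empirical mean; if the controller recommends a different arm, it
    pays the player the empirical mean gap as compensation.
    Returns (total_reward, realized_regret, total_compensation).
    """
    rng = random.Random(seed)
    K = len(mu)
    counts = [0] * K          # number of pulls per arm
    sums = [0.0] * K          # cumulative reward per arm
    compensation = 0.0
    total_reward = 0.0
    for t in range(1, T + 1):
        if t <= K:                        # pull each arm once to initialize
            arm = t - 1
        else:
            means = [sums[i] / counts[i] for i in range(K)]
            ucb = [means[i] + math.sqrt(2 * math.log(t) / counts[i])
                   for i in range(K)]
            arm = max(range(K), key=lambda i: ucb[i])
            greedy = max(range(K), key=lambda i: means[i])
            if arm != greedy:             # player must be compensated to explore
                compensation += means[greedy] - means[arm]
        reward = 1.0 if rng.random() < mu[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    realized_regret = max(mu) * T - total_reward
    return total_reward, realized_regret, compensation
```

Under this toy compensation rule, payments are only incurred on the (logarithmically many) exploration steps where UCB deviates from the greedy choice, which mirrors the intuition behind the $O(\log T)$ compensation bounds discussed above.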