Abstract
The deep two-stream architecture [23] has exhibited excellent performance on video-based action recognition. The most computationally expensive step in this approach is the calculation of optical flow, which prevents it from running in real time. This paper accelerates the architecture by replacing optical flow with motion vectors, which can be obtained directly from compressed videos without extra calculation. However, motion vectors lack fine structures and contain noisy and inaccurate motion patterns, leading to evident degradation of recognition performance. Our key insight for relieving this problem is that optical flow and motion vectors are inherently correlated. Transferring the knowledge learned by an optical flow CNN to a motion vector CNN can significantly boost the performance of the latter. Specifically, we introduce three strategies for this: initialization transfer, supervision transfer, and their combination. Experimental results show that our method achieves recognition performance comparable to the state of the art, while processing 390.7 frames per second, 27 times faster than the original two-stream method.
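As an illustration of the supervision-transfer idea, the sketch below shows one plausible loss in the style of knowledge distillation: the motion vector CNN (student) is trained to match both the ground-truth labels and the softened class distribution produced by the optical flow CNN (teacher). The function names and the `T`/`alpha` hyperparameters are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def supervision_transfer_loss(student_logits, teacher_logits, labels,
                              T=2.0, alpha=0.5):
    """Blend of (a) cross-entropy against the teacher's softened outputs
    and (b) standard cross-entropy against the hard labels (hypothetical
    weighting scheme for illustration)."""
    # (a) soft-target term: student mimics the optical flow CNN's outputs
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    soft_loss = -np.mean(np.sum(p_teacher * np.log(p_student + 1e-12), axis=-1))
    # (b) hard-label term: ordinary classification loss on action labels
    p = softmax(student_logits)
    hard_loss = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

In this sketch, a student whose logits agree with the teacher's incurs a lower loss than one that contradicts it, which is the behavior the transfer strategy relies on.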