Abstract
As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing what two-stream models have learned in order to recognize actions in video. We show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class-specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g., motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system. This document is best viewed offline, where figures play on click.
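As a concrete starting point for the kind of visualization discussed here, the sketch below implements activation maximization, one common way to synthesize an input that a single convolutional unit responds to. The abstract does not specify the paper's visualization procedure, so the model (a torchvision ResNet-18 standing in for one network stream), the layer, the unit index, and the regularization weight are all hypothetical choices for illustration, not the authors' method.

```python
# Minimal sketch of activation maximization: optimize a random input
# so that one chosen unit's activation grows. All concrete choices
# (model, layer, unit, hyperparameters) are illustrative assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()

# Capture the activations of a chosen layer with a forward hook.
acts = {}
def hook(module, inp, out):
    acts["feat"] = out

model.layer3.register_forward_hook(hook)

# Start from random noise and ascend the unit's mean activation.
x = torch.randn(1, 3, 224, 224, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
unit = 7  # hypothetical channel index

for _ in range(200):
    opt.zero_grad()
    model(x)
    loss = -acts["feat"][0, unit].mean()  # maximize activation
    loss = loss + 1e-4 * x.norm()         # mild L2 regularizer keeps x bounded
    loss.backward()
    opt.step()
```

The same idea extends to the spatiotemporal case by optimizing a stack of frames (or an appearance/flow input pair for a two-stream model) instead of a single image, though that requires a video architecture not shown in this sketch.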