Abstract
Anomaly detection in videos refers to the identification of
events that do not conform to expected behavior. However,
almost all existing methods tackle the problem by minimizing the reconstruction errors of training data, which cannot guarantee a larger reconstruction error for an abnormal event. In this paper, we propose to tackle the anomaly
detection problem within a video prediction framework. To
the best of our knowledge, this is the first work that leverages the difference between a predicted future frame and
its ground truth to detect an abnormal event. To predict a
future frame with higher quality for normal events, in addition to the commonly used appearance (spatial) constraints on intensity and gradient, we also introduce a motion (temporal) constraint in video prediction by enforcing the optical flow between predicted frames and ground truth frames to be consistent; this is the first work that introduces a temporal constraint into the video prediction task. Such
spatial and motion constraints facilitate future frame prediction for normal events, and consequently facilitate the identification of abnormal events that do not conform to the expectation. Extensive experiments on both a toy dataset
and some publicly available datasets validate the effectiveness of our method in terms of robustness to the uncertainty in normal events and the sensitivity to abnormal
events. All code is released at https://github.com/StevenLiuWen/ano_pred_cvpr2018.
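The core idea of scoring each frame by the difference between the predicted future frame and its ground truth can be sketched as follows. This is a minimal illustration, not the paper's implementation: the PSNR-based quality measure and the min–max normalization to a per-video regularity score are common choices for prediction-error anomaly scoring, and the function names here are hypothetical.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio between a predicted frame and its
    ground truth; a lower PSNR means a larger prediction error."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

def regularity_scores(psnrs):
    """Min-max normalize per-frame PSNRs to [0, 1]; frames with low
    scores are candidates for abnormal events."""
    p = np.asarray(psnrs, dtype=np.float64)
    return (p - p.min()) / (p.max() - p.min() + 1e-8)
```

A frame would then be flagged as abnormal when its regularity score falls below a chosen threshold; since normal events are predicted well, their PSNR stays high and their score stays near 1.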