Abstract
We propose a data-driven, unobtrusive, and covert method for automatic deception detection in interrogation interviews using visual cues only. Combining skin-blob analysis with Active Shape Modeling, we continuously track and analyze the motion of a subject's hands and head, together with their facial micro-expressions, while the subject responds to interview questions, extracting motion profiles that we aggregate over each interview response. Our novelty lies in the representation of the motion-profile distribution for each response: we use a kernel density estimator with uniform bins in log feature space. This scheme can represent both relatively over-controlled and relatively agitated behaviors of interviewed subjects, aiding the discrimination of truthful from deceptive responses.
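The abstract's key representational idea, uniform bins in log feature space, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, bin count, and smoothing constant `eps` are hypothetical, and a simple normalized histogram stands in for the full kernel density estimator. Binning uniformly in log space means bin edges are geometrically spaced in the original feature space, so very small motion magnitudes (over-controlled behavior) and very large ones (agitated behavior) both receive adequate resolution.

```python
import numpy as np

def log_space_histogram(features, n_bins=16, eps=1e-8):
    """Density estimate of motion-magnitude features with uniform
    bins in log space (hypothetical sketch, not the paper's code).

    eps guards against log(0) for frames with zero motion.
    Returns the normalized histogram and the bin edges mapped back
    to the original (non-log) feature space.
    """
    log_f = np.log(np.asarray(features, dtype=float) + eps)
    # Uniform edges in log space == geometric spacing in feature space.
    edges = np.linspace(log_f.min(), log_f.max(), n_bins + 1)
    hist, _ = np.histogram(log_f, bins=edges, density=True)
    return hist, np.exp(edges)
```

A per-response feature vector could then be formed by applying this to the tracked hand and head motion magnitudes within each interview response.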