Abstract
We present real-time body orientation estimation in a micro Unmanned Air Vehicle (UAV) video stream. This work is part of a fully autonomous UAV system that can maneuver to face a single individual in challenging outdoor environments. Our body orientation estimation consists of the following steps: (a) obtaining a set of visual appearance models for each body orientation, where each model is tagged with a set of scene information (obtained from sensors); (b) exploiting the mutual information of on-board sensors using latent-dynamic conditional random fields (LDCRF); (c) characterizing each visual appearance model with the most discriminative sensor information; (d) fast estimation of body orientation during the test flights given the LDCRF parameters and the corresponding sensor readings. The key aspect of our approach is to add sparsity to the sensor readings with latent variables, followed by long-range dependency analysis. Experimental results obtained over real-time video streams demonstrate a significant improvement in both speed (15 fps) and accuracy (72%) compared to state-of-the-art techniques that rely only on visual data. Video demonstrations of our autonomous flights (both ground view and aerial view) are included in the supplementary material.