Abstract
Event cameras are bio-inspired vision sensors which mimic retinas to measure per-pixel intensity change rather than outputting an actual intensity image. This proposed paradigm shift away from traditional frame cameras offers significant potential advantages: namely avoiding high data rates, dynamic range limitations and motion blur. Unfortunately, however, established computer vision algorithms cannot in general be applied directly to event cameras. Methods proposed so far to reconstruct images, estimate optical flow, track a camera and reconstruct a scene come with severe restrictions on the environment or on the motion of the camera, e.g. allowing only rotation. Here, we propose, to the best of our knowledge, the first algorithm to simultaneously recover the motion field and brightness image while the camera undergoes a generic motion through any scene. Our approach employs minimisation of a cost function that contains the asynchronous event data as well as spatial and temporal regularisation within a sliding window time interval. Our implementation relies on GPU optimisation and runs in near real-time. In a series of examples, we demonstrate the successful operation of our framework, including in situations where conventional cameras suffer from dynamic range limitations and motion blur.
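Schematically, a sliding-window cost of the kind described above, over the motion field $\mathbf{u}$ and brightness image $L$, might take the form below; the symbols, term names and weights $\lambda_i$ are illustrative placeholders rather than the exact formulation used in this work:
\[
E(\mathbf{u}, L) \;=\; E_{\text{data}}\!\left(\mathbf{u}, L;\, \{e_k\}_{t_k \in [t-T,\,t]}\right) \;+\; \lambda_1\, E_{\text{spatial}}(\mathbf{u}) \;+\; \lambda_2\, E_{\text{spatial}}(L) \;+\; \lambda_3\, E_{\text{temporal}}(\mathbf{u}, L),
\]
where $\{e_k\}$ are the asynchronous events falling inside the current time window of length $T$, the data term couples the unknowns to those events, and the remaining terms impose spatial and temporal regularisation.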