A Unifying Contrast Maximization Framework for Event Cameras (CVPR'18)



We present a unifying framework to solve several computer vision problems with event cameras: motion, depth and optical flow estimation. The main idea of our framework is to find the point trajectories on the image plane that are best aligned with the event data by maximizing an objective function: the contrast of an image of warped events. Our method implicitly handles data association between the events and therefore does not rely on additional appearance information about the scene. In addition to accurately recovering the motion parameters of the problem, our framework produces motion-corrected edge-like images with high dynamic range that can be used for further scene analysis. The proposed method is not only simple but, more importantly, is, to the best of our knowledge, the first method that can be successfully applied to such a diverse set of important vision tasks with event cameras.
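To make the idea concrete, below is a minimal sketch of contrast maximization under the simplest warp in the paper, a constant 2D velocity (vx, vy) shared by all events. The function names (warp_events, image_of_warped_events, contrast), the synthetic blob data, and the coarse-to-fine search are illustrative assumptions, not the authors' code. Events are warped to a reference time along the candidate trajectory, accumulated into an image of warped events (IWE) with bilinear voting, and the velocity that maximizes the IWE variance (one of the paper's contrast measures) is found with a generic optimizer:

import numpy as np
from scipy.optimize import minimize

def warp_events(theta, xs, ys, ts):
    # Warp each event to the reference time ts[0] along a candidate
    # constant-velocity point trajectory theta = (vx, vy), in pixels/second.
    vx, vy = theta
    dt = ts - ts[0]
    return xs - vx * dt, ys - vy * dt

def image_of_warped_events(xw, yw, shape):
    # Accumulate warped events into an image using bilinear voting, so the
    # objective varies smoothly with theta.
    h, w = shape
    iwe = np.zeros(shape)
    x0, y0 = np.floor(xw).astype(int), np.floor(yw).astype(int)
    fx, fy = xw - x0, yw - y0
    for dx, dy, wgt in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                        (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
        xi, yi = x0 + dx, y0 + dy
        ok = (xi >= 0) & (xi < w) & (yi >= 0) & (yi < h)
        np.add.at(iwe, (yi[ok], xi[ok]), wgt[ok])
    return iwe

def contrast(theta, xs, ys, ts, shape):
    # Objective: variance of the IWE, one of the contrast measures in the paper.
    xw, yw = warp_events(theta, xs, ys, ts)
    return image_of_warped_events(xw, yw, shape).var()

# Toy usage with synthetic events from a small blob translating at
# (30, -10) px/s: the correct velocity re-focuses the motion-blurred
# event trail and hence maximizes the contrast.
rng = np.random.default_rng(0)
n = 5000
ts = np.sort(rng.uniform(0.0, 0.1, n))
xs = 64 + 30.0 * ts + rng.normal(0, 2.0, n)
ys = 64 - 10.0 * ts + rng.normal(0, 2.0, n)

# Coarse grid search over candidate velocities, then local refinement.
grid = np.linspace(-50, 50, 21)
best = max(((contrast((vx, vy), xs, ys, ts, (128, 128)), (vx, vy))
            for vx in grid for vy in grid))[1]
res = minimize(lambda th: -contrast(th, xs, ys, ts, (128, 128)),
               x0=np.array(best), method="Nelder-Mead")
print("estimated flow:", res.x)  # expected near (30, -10)

The same loop covers the paper's other tasks by swapping the warp: a rotational warp for camera rotation or a depth-parameterized warp for depth estimation, while the contrast objective stays unchanged.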

Reference: G. Gallego, H. Rebecq, D. Scaramuzza, "A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth and Optical Flow Estimation," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, June 2018.
PDF:

Our research page on event-based vision:

For event-camera datasets and an event-camera simulator, see here:

Other resources on event cameras (publications, software, drivers, where to buy, etc.):

Affiliations: G. Gallego, H. Rebecq, and D. Scaramuzza are with the Robotics and Perception Group, Dept. of Informatics, University of Zurich, and Dept. of Neuroinformatics, University of Zurich and ETH Zurich, Switzerland.
