Low-latency event-based visual odometry 2013-10-02

Synchronized DVS events and CMOS frames.

A.C. and Davide Scaramuzza. Low-latency event-based visual odometry. In IEEE International Conference on Robotics and Automation (ICRA), May 2014. [pdf] [supp. material] [slides] [bibtex]

Abstract: The agility of a robotic system is ultimately limited by the speed of its processing pipeline. The use of a Dynamic Vision Sensor (DVS), a sensor producing asynchronous events as luminance changes are perceived by its pixels, makes it possible to have a sensing pipeline with a theoretical latency of a few microseconds. However, several challenges must be overcome: a DVS does not provide grayscale values but only changes in luminance; and because the output is composed of a sequence of events, traditional frame-based visual odometry methods are not applicable. This paper presents the first visual odometry system based on a DVS plus a normal CMOS camera, which provides the absolute brightness values. The two sources of data are automatically spatiotemporally calibrated from logs taken during normal operation. We design a visual odometry method that uses the DVS events to estimate the relative displacement since the previous CMOS frame by processing each event individually. Experiments show that the rotation can be estimated with surprising accuracy, while the translation can be estimated only very noisily, because it produces few events due to very small apparent motion.
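The abstract describes a pipeline in which each asynchronous DVS event incrementally updates a relative-pose estimate anchored to the most recent CMOS frame, which is then reset when a new frame arrives. The sketch below illustrates only that event-by-event update structure; the `Event` fields, the `EventOdometry` class, and the simple per-event nudge are all hypothetical stand-ins, not the paper's actual estimator (which weighs each event against the brightness gradients of the keyframe).

```python
# Illustrative sketch of per-event relative-pose updating.
# All names and the update rule are assumptions for illustration,
# not the method from the paper.
from dataclasses import dataclass


@dataclass
class Event:
    t: float       # timestamp in seconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for brightness increase, -1 for decrease


class EventOdometry:
    """Accumulates a relative motion estimate event by event,
    relative to the most recent CMOS keyframe."""

    def __init__(self) -> None:
        self.keyframe_time = 0.0
        self.estimate = 0.0  # e.g. yaw offset since keyframe (rad)

    def on_frame(self, t: float) -> None:
        # A new CMOS frame becomes the keyframe and resets the
        # relative estimate to zero.
        self.keyframe_time = t
        self.estimate = 0.0

    def on_event(self, ev: Event, step: float = 1e-4) -> float:
        # Each event nudges the estimate individually; a real system
        # would score the event against the keyframe's image gradients
        # instead of this placeholder polarity step.
        self.estimate += step * ev.polarity
        return self.estimate


vo = EventOdometry()
vo.on_frame(0.0)
for i in range(10):
    vo.on_event(Event(t=i * 1e-4, x=5, y=5, polarity=1))
print(vo.estimate)  # accumulated offset since the keyframe
vo.on_frame(1.0)    # next CMOS frame resets the relative estimate
```

The key structural point, which the sketch preserves, is that events are consumed one at a time with microsecond-scale granularity, so the estimate is available between frames rather than only at the frame rate.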

Additional materials
