Traffic Lights Recognition (TLR) public benchmarks

Urban scene 1

Dataset

© Read the copyright information before any use.

Urban database - Frame 5164/11179. On-board vehicle acquisition in a dense urban environment.

11 179 frames (8min 49sec, @25FPS)
640×480 (RGB, 8bits)
Paris (France)

Acquisition description:
acquired from the C3 vehicle with a Marlin F-046C camera sensor (@ 25 Hz) and a 12 mm lens; the camera was mounted behind the interior rear-view mirror; vehicle speed < 50 km/h (< 31 mph)


Figures: Urban database - map of the trajectory (Paris, France), and sample frames 2093, 6621, 7486, 8752, and 9599 of 11179.

Downloads

Sequence and ground-truth data are publicly available, free of charge.
The sequences can be downloaded either as MPEG-2, JPEG single files, JSEQ, or RTMaps (cf. below). Ground truths can be downloaded from the same page. Since several file formats exist for ground truth, we distribute our files in all the main formats: GT (plain text), CVML, and VIPER.

Data are also available as RTMaps files, which contain the raw acquisition data (such as the camera output with timestamps). RTMaps is a real-time multisensor prototyping software which we use as the on-board application to record our acquisitions and later replay them. More information is available on the RTMaps company website.

We will be pleased to publish the results of your Traffic Light Recognition algorithm on our website, as long as you use the same databases (or your databases are public).

Sequence 11179 frames (640×480, RGB, 8bits)

Ground Truth files v0.5 (9168 hand-labeled traffic lights)

Benchmarks

Listed here are the performances of the algorithms on the sequences described above. For more information about the evaluation, please refer to the FAQ section below.
If you want your algorithm to be listed in this section, contact us and send us your results (cf. Publishing your results).

Robotics Centre of Mines ParisTech and Imara Team of INRIA (May, 1st 2010)

(Raoul de Charette1 and Fawzi Nashashibi1,2, 2010)

Traffic Light Recognition (TLR) algorithm from Robotics Centre of Mines
ParisTech & INRIA.

Download high res. video 1min44 (XVID, 240MB)
Download low res. video 1min44 (XVID, 20MB)

Publications
[1] R. de Charette and F. Nashashibi, “Real time visual traffic lights recognition based on Spot Light Detection and adaptive traffic lights templates,” 2009 IEEE Intelligent Vehicles Symposium, Xi'an: IEEE, 2009, pp. 358-363. (read on IEEE Xplore - ACM)
[2] R. de Charette and F. Nashashibi, “Traffic light recognition using image processing compared to learning processes,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis: IEEE, 2009, pp. 333-338. (read on IEEE Xplore - ACM)
Please note that these publications do not describe the current state of our traffic light recognition system, which has evolved considerably since.
A new publication describing the whole system is forthcoming.

1 Robotics Centre of Mines ParisTech, France (CAOR - Centre de Robotique)
2 Imara Team, INRIA Rocquencourt, France (IMARA - Informatique, Mathématiques et Automatique pour la Route Automatisée)

Electrical and Computer Engineering Department, University of Patras, Rio, Patras, Greece (2012)

[1] G. Siogkas, E. Skodras, and E. Dermatas, “Traffic Lights Detection in Adverse Conditions Using Color, Symmetry and Spatiotemporal Information,” in International Conference on Computer Vision Theory and Applications (VISAPP 2012), 2012, pp. 620–627. (read on the University of Patras website - ResearchGate)

FAQ

How can I add the performance of my algorithm to this page?

Please refer to the section Publishing your results.

Which objects are labeled in the sequence?

So far, only Traffic Lights (with circular light). But since we made this sequence public, feel free to label other objects in the sequence and send us the new ground-truth file, which we will be pleased to add to this webpage.

How many objects are labeled in the Ground Truth file?

The ground truth file contains 9 168 hand-labeled instances of traffic lights.
The breakdown is as follows: 3 381 “green” (called 'go'), 58 “orange” (called 'warning'), 5 280 “red” (called 'stop'), and 449 “ambiguous” (cf. below).
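As a quick sanity check, the per-class counts do sum to the stated total. A minimal Python sketch (the dictionary keys reuse the class names quoted above):

```python
# Per-class counts of hand-labeled traffic lights, as reported above.
counts = {"go": 3381, "warning": 58, "stop": 5280, "ambiguous": 449}

total = sum(counts.values())
assert total == 9168  # matches the total reported for Ground Truth v0.5
print(total)  # → 9168
```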

What is an “ambiguous traffic light”?

During the labeling process, our human operators noticed several ambiguous regions for which they had trouble deciding whether a real traffic light (with circular light) was present or not. We thus decided to simply ignore these ambiguous regions during the evaluation. Therefore, any traffic light detected in these regions is counted neither as a “false positive” nor as a “true positive”.
There are very few “ambiguous” regions, and a region was labeled “ambiguous” only if it satisfies one of the following conditions:

  • Reflection distortion. The region is a reflection of an object which seems to be a traffic light.
  • Light shape not valid. The light of the traffic light appears circular whereas it is in fact rectangular (usually due to CCD approximation or motion blur).
  • Too blurry. The traffic light is too blurry during its whole timeline (usually due to vehicle turning, vehicle pitch, or potholes) (for instance, frames 3568-3616).
  • Too small. The traffic light is too small during its whole timeline (for instance, frame 9 200).
  • Not facing the vehicle. The traffic light is not facing the vehicle but the light is still visible (for instance, frame 9 302).
  • Lower traffic light. The small, lower traffic lights mounted under the main one are ignored; these are specific to France (for instance, frame 5 260).

What is the minimum size of labeled traffic lights?

Traffic lights were labeled once they are at least 5 pixels wide.

Why are there objects with out-of-bounds coordinates?

Out-of-bounds coordinates (negative, or greater than the image width/height) correspond to traffic lights that are only partially visible (leaving the camera field of view). These partially visible traffic lights are ignored during the evaluation.
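To visualize such labels, one can clip each box to the image; a box that needs clipping is one of those ignored during evaluation. A minimal sketch, assuming a hypothetical `(x_min, y_min, x_max, y_max)` box layout (the actual ground-truth file format may differ):

```python
def clip_to_image(box, width=640, height=480):
    """Clip a (x_min, y_min, x_max, y_max) box to the 640x480 image.

    The box layout is a hypothetical example, not the actual GT format.
    A box is partially out of the field of view iff clipping changes it.
    """
    x_min, y_min, x_max, y_max = box
    return (max(0, x_min), max(0, y_min),
            min(width - 1, x_max), min(height - 1, y_max))

box = (-12, 30, 25, 90)                    # negative x: partially visible
print(clip_to_image(box))                  # → (0, 30, 25, 90)
print(clip_to_image(box) != box)           # → True, so it is ignored
```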

Which objects are used to evaluate the performance of the algorithms?

The objects used for the evaluation are those which are: not labeled “ambiguous” (cf. above), entirely visible (not partially out of the field of view), and not “warning”/orange (due to their very small number).
In total, 8 437 instances of traffic lights are used for the evaluation (731 were ignored because of partial visibility, 423 due to their 'ambiguous' status, and 58 because they are 'warning').
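Putting the three exclusion rules together, here is a sketch of the instance selection. The field layout and label strings are assumptions based on the class names quoted above, not the actual ground-truth format:

```python
def used_for_evaluation(label, box, width=640, height=480):
    """Keep an instance only if it is not 'ambiguous', not 'warning',
    and its (hypothetical) (x_min, y_min, x_max, y_max) box lies
    entirely inside the 640x480 image."""
    if label in ("ambiguous", "warning"):
        return False
    x_min, y_min, x_max, y_max = box
    return 0 <= x_min and 0 <= y_min and x_max < width and y_max < height

print(used_for_evaluation("stop", (100, 50, 110, 80)))     # → True
print(used_for_evaluation("warning", (100, 50, 110, 80)))  # → False
print(used_for_evaluation("go", (-5, 50, 10, 80)))         # → False
```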

Publishing your results

In order to publish your performance on this webpage, please send us the results of your algorithm on the sequences described above. The “recognition result file” should be written in one of the following formats: CVML, VIPER, or GT. It is also possible to use our tool (cf. Tools section) to easily generate the file.
You are welcome to give details about your algorithm or to attach a video of your results, which we will also publish on this webpage.

The “result file” (as well as any additional information) should be sent to Raoul de CHARETTE: raoul.de_charette{ARO_BASE}mines-paristech.fr

Note that the performance is computed according to the rules described in the FAQ section, and these rules are (of course) exactly the same for all algorithms.

Copyrights

All data are free, publicly available, and can be used for any research purpose.
However, if you publish results (or make your tests public in any other way), please acknowledge that the data come from the Robotics Centre of Mines ParisTech and are publicly available at: http://www.lara.prd.fr/benchmarks/trafficlightsrecognition


Commercial use is NOT ALLOWED without our official agreement.

Contact

For any information, or for questions about commercial use, please contact Raoul de CHARETTE: raoul.de_charette{ARO_BASE}mines-paristech.fr

 
benchmarks/trafficlightsrecognition.txt · Last modified: 2013/10/29 01:28 by Raoul de CHARETTE