
Multi-Level Sensorfusion and Computer-Vision Algorithms within a Driver Assistance System for Avoiding Overtaking Accidents
FISITA2008/F2008-08-062

Authors

Hohm, Andree* - Technische Universität Darmstadt, Germany
Wojek, Christian - Technische Universität Darmstadt, Germany
Schiele, Bernt - Technische Universität Darmstadt, Germany
Winner, Hermann - Technische Universität Darmstadt, Germany

Abstract

Keywords - overtaking, ADAS, sensor fusion, object detection, tracking

A large number of overtaking accidents occur on two-lane rural roads, causing many serious injuries and fatalities. In many cases, an inaccurate assessment of the traffic situation is identified as the major cause. Hence, a driver assistance concept for these scenarios promises a high safety benefit.

This paper presents the sensor and data-fusion approach of a system that provides this assistance function.

The level of information about the car's environment required for overtaking assistance depends on the phase of the overtaking maneuver. In early stages, i.e. just before the overtaking vehicle initiates the first lane change, it is only necessary to obtain information about oncoming cars at long range. In late stages, i.e. when the overtaking speed is too low, dangerous situations can arise because the gap in front of the car being overtaken can no longer be reached. In this case, it is necessary to calculate an evasion path, based on the perception of unoccupied space in front of the overtaking car.

A fusion of different automotive sensors is proposed in order to cover all parts of the overtaking scenario in the system's perception: information about independently moving objects in front of the car is obtained from a radar device by exploiting the Doppler shift. Moreover, we employ a CMOS camera sensor. Two algorithms are run on the camera's video stream: a texture-based free-space detector and an object detection algorithm. Details of these algorithms are given in later sections of the paper.

The proposed approach fuses raw radar object data with the output of a video-based object detection algorithm. This mid-level fusion yields a list of moving objects across the whole targeted field of view. For the free-space part, a typical occupancy-grid representation of the environment in front of the car is employed for the shorter distances in the field of view, the area relevant for evasion maneuvers. The grid is filled by the camera's free-space detection and corrected with the known objects from the object list, resulting in a high-level grid fusion.
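The grid-fusion step described above can be sketched as follows. This is a minimal illustration only: the grid resolution, probability values, coordinate frame, and all function names are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

# Illustrative grid: 20 m x 10 m area ahead of the car, 0.25 m cells.
# Cell values are occupancy probabilities; 0.5 means "unknown".
CELL = 0.25
grid = np.full((int(20 / CELL), int(10 / CELL)), 0.5)

def update_free_space(grid, free_cells, p_free=0.2):
    """Lower occupancy probability for cells the camera's texture-based
    free-space detector classifies as drivable."""
    for ix, iy in free_cells:
        grid[ix, iy] = min(grid[ix, iy], p_free)
    return grid

def correct_with_objects(grid, objects, p_occ=0.9):
    """Raise occupancy for cells covered by objects from the fused
    (radar + video) object list, overriding the free-space estimate."""
    for x, y, width in objects:  # metres, in the grid frame
        ix = int(x / CELL)
        for iy in range(int((y - width / 2) / CELL),
                        int((y + width / 2) / CELL) + 1):
            if 0 <= ix < grid.shape[0] and 0 <= iy < grid.shape[1]:
                grid[ix, iy] = max(grid[ix, iy], p_occ)
    return grid

# One fusion cycle: camera marks a row of cells free, then a known
# object (1.8 m wide, 2.5 m ahead, 5 m lateral) re-occupies part of it.
grid = update_free_space(grid, [(10, j) for j in range(40)])
grid = correct_with_objects(grid, [(2.5, 5.0, 1.8)])
```

The key design point this sketch illustrates is the correction order: object evidence from the fused object list takes precedence over the camera's free-space classification within the same cells.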

In particular, it is shown that the fusion of both sensor inputs is beneficial. First, the radar device can detect oncoming vehicles at relatively long range, where object detection from video frames becomes increasingly difficult. Second, in close range, both sensors benefit from the fusion of multiple cues: false positive detections can be filtered out, and video object detections allow an improved estimation of other vehicles' widths. Experimental results on real-world data recorded with a typical onboard system are given in the results section.
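The complementarity described above can be illustrated with a simple nearest-neighbour association between radar objects and video detections. The gating distance, tuple layouts, and function name are illustrative assumptions and not the paper's actual fusion method, which may use a more elaborate tracking and gating scheme.

```python
def fuse(radar_objs, video_dets, gate=2.0):
    """Sketch of radar/video object association.

    radar_objs: [(x, y)] positions in metres from the radar.
    video_dets: [(x, y, width)] detections from the video algorithm.
    A radar object matched to a video detection within the gate keeps
    the radar position but adopts the video width estimate; unmatched
    radar objects (e.g. distant oncoming cars the camera cannot see)
    are kept with an unknown width.
    """
    fused = []
    used = set()
    for rx, ry in radar_objs:
        best, best_d = None, gate
        for i, (vx, vy, w) in enumerate(video_dets):
            d = ((rx - vx) ** 2 + (ry - vy) ** 2) ** 0.5
            if i not in used and d < best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            fused.append((rx, ry, video_dets[best][2]))  # radar pos + video width
        else:
            fused.append((rx, ry, None))  # radar-only track, width unknown
    return fused

# A distant radar-only object and a close object confirmed by video:
result = fuse([(50.0, 0.0), (10.0, 1.0)], [(10.5, 1.2, 1.8)])
```

Video detections left unmatched by any radar object could additionally be treated as likely false positives in close range, which is one way the fusion filters spurious detections.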
