
Project

The automatic blind spot camera: hard real-time detection of moving objects from a moving camera

Each year, traffic accidents caused by the blind spot zone of trucks account for an estimated 1300 casualties in Europe alone. Several commercial systems have been developed to cope with these blind spot zones, but each has specific disadvantages and none of them handles the blind spot problem completely. The most widely used safety systems are blind spot mirrors, which have been required by law since 2003. Blind spot cameras are also frequently employed: using wide-angle lenses, they display the blind spot zone on a monitor in the truck's cabin when a right-hand turn is signalled. More recently, active safety systems (such as ultrasonic distance sensors) have been introduced, which automatically raise an alarm to the truck driver. However, these systems fail to distinguish static objects (e.g. traffic signs) from vulnerable road users (VRUs) and often generate false positive alarms.

In this dissertation we therefore describe an active safety system that relies solely on the input images from the blind spot camera. Using computer vision object detection methodologies, our system efficiently detects VRUs in the challenging blind spot images and automatically warns the truck driver of their presence. Such a system has several advantages: it is always adjusted correctly, can easily be integrated in existing passive blind spot camera setups, does not rely on the attentiveness of the truck driver and is able to distinguish VRUs from static objects.

Developing such a safety system is not an easy task, however. The VRUs span multiple classes (pedestrians, bicyclists, children and so on) and appear in very diverse viewpoints and poses. Additionally, we need to cope with the large viewpoint and lens distortion induced by traditional blind spot cameras. Finally, our specific application inherently imposes extremely stringent demands on detection accuracy, throughput and latency. Excellent accuracy at real-time processing speeds is needed for such a system to be usable in real-life scenarios. However, assuring hard real-time detection behaviour conflicts with the requirement for high detection accuracy: object detection methodologies often require significant computational power to achieve high accuracy, so a trade-off traditionally exists between accuracy and throughput when only limited hardware is available. This is infeasible for our application: our active safety system should achieve excellent accuracy and at the same time run in real time on low-cost embedded hardware.

We developed a methodology that eliminates this trade-off. The advantage of this contribution is two-fold. First, it allows for the detection of VRUs in challenging images where existing object detectors fail (due to the specific viewpoint and lens distortion). Second, it enables the use of highly accurate object detection methodologies that would otherwise be too time-consuming. As such, we achieve excellent accuracy at real-time processing speeds. To validate this, we acquired a unique and valuable dataset recorded with a genuine blind spot camera mounted on a real truck, in which several dangerous blind spot scenarios were simulated. This dataset grew in size and complexity throughout the dissertation.

Our initial methodology enabled the efficient detection and tracking of pedestrians in blind spot camera images, and we showed that it generalises easily to other scenarios with a similar viewpoint. We then presented additional contributions that further increase detection accuracy: a methodology for efficiently combining multiple pedestrian detectors, an extension of our initial approach that better models the specific distortion, and a method that enables multiclass detection. Finally, we further optimised and integrated these methodologies into a final, vision-only active safety system for the blind spot zone. We conclude that this final system meets the stringent accuracy and latency demands required for it to be usable in practice.
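To illustrate how a vision-only blind spot warning loop of this kind could be structured, the sketch below warps each incoming frame towards a less distorted view and runs an off-the-shelf pedestrian detector on it. This is only a minimal sketch under assumed components: the warp matrix, the stock HOG+SVM pedestrian detector, the alarm threshold and the video file name are hypothetical placeholders, not the actual methodology developed in the dissertation.

```python
# Minimal sketch of a vision-only blind spot warning loop.
# Assumptions (not from the dissertation): a fixed perspective warp
# roughly compensates the wide-angle viewpoint, and OpenCV's stock
# HOG+SVM pedestrian detector stands in for the detectors discussed above.
import cv2
import numpy as np

# Hypothetical homography mapping the distorted blind spot view to an
# approximately upright view; in practice this would be calibrated.
WARP_MATRIX = np.array([[1.0, -0.3, 0.0],
                        [0.1,  1.0, 0.0],
                        [0.0,  0.0, 1.0]], dtype=np.float32)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def process_frame(frame, min_score=0.5):
    """Warp one frame, detect pedestrians, and return (alarm, boxes)."""
    h, w = frame.shape[:2]
    upright = cv2.warpPerspective(frame, WARP_MATRIX, (w, h))
    boxes, scores = hog.detectMultiScale(upright, winStride=(8, 8))
    hits = [box for box, s in zip(boxes, scores) if float(s) >= min_score]
    return len(hits) > 0, hits

def main(video_path="blind_spot.avi"):  # hypothetical recording
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        alarm, boxes = process_frame(frame)
        if alarm:
            print(f"VRU warning: {len(boxes)} detection(s) in blind spot zone")
    cap.release()

if __name__ == "__main__":
    main()
```

In a deployed system the per-frame processing would of course have to fit the hard real-time latency budget on embedded hardware, which is precisely the constraint the dissertation's methodology is designed to meet.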

Date: 1 Mar 2010 → 30 Sep 2016
Keywords: blind spot camera
Disciplines: Sensors, biosensors and smart sensors; Other electrical and electronic engineering
Project type: PhD project