Distracted Driver Detection System Using Deep Learning Technique


Varan Singh Rohila, Vijay Kumar, Karan Kumar Barnwal
DOI: 10.4018/978-1-7998-3299-7.ch006

Abstract

Improving public safety and reducing accidents are the critical goals of intelligent systems that detect driver fatigue and distracted behavior while driving. Driver fatigue and monotony are essential factors in accidents, especially on rural roads. Such distracted behavior reduces the driver's thinking ability at that instant, and because of this loss of decision-making ability, the driver can lose control of the vehicle. Studies show that a driver usually gets tired after an hour of driving. Driver fatigue and drowsiness occur much more often in the afternoon, in the early hours, after lunch, and at midnight. Loss of consciousness can also result from drinking alcohol, drug addiction, etc. The distracted driver detection system proposed in this chapter takes a multi-faceted approach by monitoring driver actions and fatigue levels. The proposed activity monitor achieves an accuracy of 86.3%. The fatigue monitor has been developed and tuned to work well in real-life scenarios.

Introduction

There are several driver monitoring systems for offices and vehicles available in the market. These are generally based on image capture, facial detection, and composite image processing. The work on eye detection and face tracking is divided into two categories (Wang et al. 2007):

  1. Passive appearance-based methods
  2. Active infrared (IR)-based methods

The basic building block of such systems is a facial detection algorithm, typically implemented using artificial intelligence techniques or support vector machines (SVMs). Percentage eye closure (PERCLOS) is an eye-closure metric that performs well on driving simulators: it records the proportion of time that the driver's eyes are more than 80% closed. Additional IR methods shine an IR spotlight on the driver's face in low-light conditions (Abouelnaga et al. 2018).
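The PERCLOS metric described above can be sketched as follows. This is a minimal illustration, not the chapter's implementation: it assumes a per-frame eye-closure ratio in [0, 1] is already available from an eye detector, and applies the 80% (P80) threshold mentioned in the text.

```python
import numpy as np

def perclos(closure, threshold=0.8):
    """Fraction of frames in which eye closure exceeds the threshold.

    closure: per-frame eye-closure ratios in [0, 1], where 1.0 means
    fully closed. The 0.8 default follows the P80 convention: count
    the time the eyes are more than 80% closed.
    """
    closure = np.asarray(closure, dtype=float)
    return float(np.mean(closure > threshold))

# An alert driver blinks occasionally; a drowsy driver's eyes stay
# mostly closed, which drives the PERCLOS score up.
alert  = [0.1, 0.2, 0.1, 0.9, 0.1, 0.2, 0.1, 0.1, 0.1, 0.2]
drowsy = [0.9, 0.95, 0.85, 0.9, 0.2, 0.9, 0.95, 0.9, 0.85, 0.9]
print(perclos(alert))   # 0.1
print(perclos(drowsy))  # 0.9
```

A deployed system would compute this over a sliding window of recent frames and raise a fatigue alarm when the score crosses a tuned cutoff.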

Baheti et al. (2018) presented a method that utilizes two cameras, an IR spotlight consisting of a twenty-eight-LED IR grid, and a computer running computer vision algorithms. In another work, Dinges et al. (1998) incorporated two IR sources with distinct wavelengths: images are captured at 850 nm and 950 nm. The brightness of the pupil differs between the two images, whereas the rest of each image is identical, so subtracting the photographs isolates the pupil and eyelid region on which PERCLOS is computed. Figure 1 demonstrates this technique.

Figure 1.

The driver’s pupil: (a) first image; (b) second image from another IR sensor; (c) difference image
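The dual-wavelength subtraction shown in Figure 1 can be illustrated with synthetic frames. This is a toy sketch under stated assumptions: the two arrays stand in for the 850 nm (bright-pupil) and 950 nm (dark-pupil) frames, which are assumed identical except at the pupil, and the threshold value is arbitrary.

```python
import numpy as np

# Identical background in both frames; only the pupil differs.
face = np.full((8, 8), 120, dtype=np.int16)
bright = face.copy()
bright[3:5, 3:5] = 250   # bright-pupil frame (retro-reflection at 850 nm)
dark = face.copy()
dark[3:5, 3:5] = 30      # dark-pupil frame (950 nm)

# Subtracting the frames cancels everything except the pupil.
diff = np.clip(bright - dark, 0, 255).astype(np.uint8)
pupil_mask = diff > 50   # arbitrary threshold on the residual
ys, xs = np.nonzero(pupil_mask)
print(ys.min(), ys.max(), xs.min(), xs.max())  # pupil bounding box: 3 4 3 4
```

In a real system the two frames would come from synchronized IR cameras, and the recovered pupil region would feed the eye-closure measurement.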


Visual distractions are described with terms such as “sleepiness”, “drowsiness”, “fatigue”, and “inattention”, and their detection usually depends on facial landmark detection and tracking. Manual distractions mainly concern the driver's activities other than safe driving (e.g., reaching behind, adjusting hair and makeup, or eating and drinking). For this kind of distraction, authors often rely heavily on hand tracking and driving posture estimation. In this chapter, we focus only on “manual” distractions, where a driver is distracted by texting or using a cell phone, calling, eating or drinking, reaching behind, fiddling with the radio, adjusting hair and makeup, or talking to a passenger.
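The manual-distraction categories listed above map naturally onto the output head of an activity classifier. The sketch below is illustrative only: the class ordering, label strings, and `predict_label` helper are assumptions, not the chapter's actual model, and the logits stand in for the raw scores a trained network would produce.

```python
import numpy as np

# Hypothetical label set mirroring the "manual" distractions listed
# in the text (the ordering here is an assumption).
CLASSES = [
    "safe driving", "texting or using cell phone", "calling",
    "eating or drinking", "reaching behind", "fiddling with the radio",
    "adjusting hair and makeup", "talking to a passenger",
]

def predict_label(logits):
    """Map a classifier's raw scores to a label via softmax + argmax."""
    logits = np.asarray(logits, dtype=float)
    probs = np.exp(logits - logits.max())  # shift for numerical stability
    probs /= probs.sum()
    return CLASSES[int(np.argmax(probs))], float(probs.max())

label, confidence = predict_label([0.1, 2.5, 0.3, 0.2, 0.1, 0.0, 0.4, 0.2])
print(label)  # texting or using cell phone
```

Whatever backbone produces the logits, this final softmax stage is where the per-class probabilities behind an accuracy figure such as 86.3% are read off.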

Many research projects perform fatigue detection with a camera pointed directly at the driver's gaze. Jiangwei et al. (2004) extracted the mouth's shape and position to determine whether the driver is yawning. The Seeing Machines tool uses two operational cameras, one placed on the driver's left and another on the right. A processing unit matches specific points of interest between the two images and computes the spatial position of each segment. It determines blink rate, eye opening, and eye gaze, even for drivers wearing glasses. Fatigued drivers usually narrow their field of view and look directly ahead, which reduces their checking of the mirrors and instruments; this makes monitoring their gaze all the more important. However, such systems are costly, which renders them unobtainable for most drivers (Horng et al. 2004).
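Mouth-shape yawning detection, as in Jiangwei et al. (2004), is commonly reduced to a mouth aspect ratio computed from facial landmarks. The sketch below is a simplified illustration, not the cited method: the four-point landmark dictionary and the 0.6 yawn threshold are assumptions, and a real system would obtain the points from a facial-landmark detector.

```python
import math

def mouth_aspect_ratio(landmarks):
    """Vertical mouth opening relative to mouth width.

    `landmarks` is a hypothetical dict of (x, y) points for the two
    mouth corners and the mid upper/lower lip.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(landmarks["top"], landmarks["bottom"])
    horizontal = dist(landmarks["left"], landmarks["right"])
    return vertical / horizontal

closed = {"left": (0, 0), "right": (60, 0), "top": (30, -5),  "bottom": (30, 5)}
yawn   = {"left": (0, 0), "right": (60, 0), "top": (30, -25), "bottom": (30, 25)}
print(round(mouth_aspect_ratio(closed), 3))  # 0.167
print(mouth_aspect_ratio(yawn) > 0.6)        # True -> flag a possible yawn
```

In practice the ratio is tracked over time, and a sustained high value, rather than a single frame, is treated as a yawn.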

Some researchers devised low-cost schemes using embedded devices and cameras for fatigue detection. Suganya et al. (2017) utilized a Raspberry Pi and an IR-sensitive video camera. In another work, Palani and Kothandaraman (2013) used a regular camera and computer in laboratory settings with some constraints; however, their system did not work well for drivers with dark skin. This kind of bias is what we aim to reduce: the presented model is generalized and works well irrespective of the characteristics of the driver.
