The computers integrated into vehicles could soon determine if a driver is intoxicated simply by analyzing their facial features, according to researchers. This technology, by continuously monitoring the driver for typical signs of inebriation, holds the potential to significantly reduce drunk driving incidents.

The project, detailed in a paper presented on April 9 at a joint Institute of Electrical and Electronics Engineers (IEEE) and Computer Vision Foundation (CVF) conference, enables in-car computing systems to evaluate a driver’s intoxication level with 75% accuracy as soon as they enter the vehicle.

Unlike existing computer-aided methods that depend on observable behaviors such as steering patterns, pedal usage, and vehicle speed—which require the vehicle to be in motion for some time—this new system employs a single color camera to monitor variables like gaze direction and head position. The comprehensive system can also integrate 3D and infrared footage of the driver’s face, rearview videos showing driver posture, steering interactions, event logs, and screen recordings of driving behavior.

“Our system has the capability to identify intoxication levels at the beginning of a drive, allowing for the potential prevention of impaired drivers from being on the road,” said Ensiyeh Keshtkaran, a doctoral student at Edith Cowan University, Australia, who contributed to the project. She further explained that the software can be seamlessly integrated into the digital architectures of smart vehicles, such as eye-tracking and driver monitoring systems, making it easily adaptable to environments like smartphones.

The World Health Organization (WHO) estimates that alcohol impairment is involved in 20% to 30% of fatal car accidents worldwide. In Australia, where this project originated, 30% of fatal crashes involve blood alcohol levels exceeding the legal limit of 0.05%.

“Although efforts are underway to integrate driver alcohol detection systems into future vehicle generations, and the advent of autonomous cars is on the horizon, the persistent issue of drunk driving remains an urgent concern,” Keshtkaran emphasized.

The study recorded video footage of drivers of varying ages, drinking habits, and driving experience as they used simulators at three levels of intoxication: sober, mildly intoxicated, and severely intoxicated. In collaboration with software company MiX by Powerfleet, data was collected from alcohol-impaired drivers in controlled yet realistic environments. The algorithm analyzed the video footage for discernible facial cues of intoxication and correctly predicted a driver’s state in 75% of cases. Common visual cues of intoxication include bloodshot eyes, a flushed face, droopy eyelids, and a dazed look, as noted by the Oregon Liquor and Cannabis Commission.

Project lead Syed Zulqarnain Gilani, a senior lecturer in the School of Science at Edith Cowan University, stated that the next steps involve enhancing the resolution of the image data received by the algorithm to make even more accurate predictions. “If low-resolution videos are proven sufficient, this technology can be employed by surveillance cameras installed on roadsides,” Gilani mentioned.

For now, the findings mark a significant advancement because the system can identify intoxication levels before the car even moves. This innovation could lead to a future where smart cars refuse to start when a drunk driver is detected behind the wheel, or even alert authorities.

By Impact Lab