Drowsy driving is a big problem. The National Highway Traffic Safety Administration estimates that in 2017, drowsy drivers in the United States caused roughly 90,000 crashes, 50,000 injuries, and 800 deaths.
While manufacturers work toward autonomous vehicles, they are not ready to let a driver sit back and nap. Navigating a vehicle through an unpredictable world full of inattentive pedestrians, inclement weather, and deteriorating roads remains far beyond the most advanced artificial intelligence (AI). Most vehicles won’t advance beyond Level 3 autonomy for the foreseeable future, requiring the driver to remain alert and take over whenever the vehicle requests. Drowsiness will still pose a problem.
Cars must determine the driver’s state of awareness, and potential solutions vary. Some systems track drivers’ eyelids, but these struggle in certain lighting conditions or when a driver looks away. Others assess driver inputs, such as steering wheel movements, but this won’t work in an autonomous vehicle when the driver isn’t steering. New research from Graz University of Technology, however, determines drowsiness from the electrical activity of the heart through a clever application of AI.
Patterns in the Data
Training robust machine learning algorithms for classification requires considerable amounts of labeled examples. To generate labeled training examples, the Austrian team used a custom-built driving simulator. This is not some video game or a racing seat like you would see at an arcade. The Automated Driving Simulator of Graz (ADSG) starts with an entire MINI Countryman car. Eight LCD panels surround the driver, wind and engine noise come through the sound system, and bass speakers vibrate the whole apparatus.
“It’s very realistic,” says Arno Eichberger, an engineering professor at Graz and the team leader. “And for the specific study here on drowsiness, it’s even better because this type of monotonous driving is not that difficult to simulate.”
They recreated a nighttime drive on a highway without any traffic. “For some of the drivers, we got a complete micro sleep,” he says.
The team collected data on 92 drivers. Each participant came to the lab twice: once while rested, and once when fatigued. In the fatigued condition, they were required to have been awake for at least 16 hours or to have slept for no more than four hours the previous night. On each visit, they participated in manual- and automated-driving scenarios.
“The basic idea was to create a huge, unique database of drowsy drivers that would also be available for public use,” Eichberger says.
While the researchers collected varied data on their drivers, including eye movement, respiration, perspiration, gaze direction, and pupil dilation, this study relied solely on heart activity measured with electrocardiogram (ECG) electrodes.
Objectively measuring drowsiness remains a research challenge, but for their study, the Graz researchers created ground-truth labels by asking traffic psychologists to watch recordings of the drivers and give their best assessment based on yawning, head nodding, and long blinks. The psychologists assigned one of four labels: alert, moderately drowsy, extremely drowsy, or falling asleep. This study combined the last two into extremely drowsy.
Eichberger says it is important to have at least three classes of drowsiness because if you have only two—alert and extremely drowsy—then when a car warns you that you’re extremely drowsy, you’re already in danger.
Eichberger’s team used a form of machine learning called deep learning, which involves multilayered neural networks. They built a convolutional neural network, a network designed specifically to process spatial input. This type of algorithm can look at a cat’s photo, discern complex patterns in the pixels, and identify the image as a cat.
Previous drowsiness detection methods had used manually coded rules to process ECG signals, which arrive as a complex waveform. Each time the heart beats, it produces something in the ECG called an R-peak. Programmers have told their software to look for those R-peaks, measure the length of time between them, and calculate how much those spans vary, producing a statistic called heart-rate variability, which correlates with drowsiness. But these methods might miss other important information hidden in the ECG signal that researchers don’t know to look for. The power of deep learning is its ability to find those subtle patterns, the way humans build intuition through experience.
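The hand-coded pipeline described above is simple enough to sketch. The study itself used MATLAB; this Python analogue (with hypothetical R-peak timestamps) computes SDNN, the standard deviation of the R-R intervals, which is one common heart-rate-variability statistic:

```python
import numpy as np

def rr_intervals(r_peak_times):
    """Inter-beat (R-R) intervals in seconds from a list of R-peak timestamps."""
    return np.diff(np.asarray(r_peak_times, dtype=float))

def heart_rate_variability(r_peak_times):
    """SDNN: the sample standard deviation of the R-R intervals."""
    rr = rr_intervals(r_peak_times)
    return rr.std(ddof=1)

# Hypothetical R-peak times (seconds) for a heart slowing slightly over time.
peaks = [0.00, 0.80, 1.62, 2.46, 3.33, 4.22]
print(round(heart_rate_variability(peaks), 4))  # → 0.0365
```

A rule-based detector would threshold statistics like this one; the deep learning approach instead lets the network decide which properties of the signal matter.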
So how do you apply a convolutional neural network, designed for images, to an ECG signal, which is just a sequence of electrical amplitudes? And why would you do that? To answer the how: You turn the waves into an image.
Sadegh Arefnezhad, the lead author of the paper on the new method, published in Energies, used Wavelet Toolbox™ in MATLAB® to create wavelet scalograms. Time-series data can be considered the sum of many brief “wavelets” of different frequencies. MATLAB decomposes the wave into these simpler wavelets, with time on the x-axis and frequency on the y-axis. The brightness at each point in the wavelet scalogram represents the amplitude of the wavelet of that frequency at that time.
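The transform can be sketched without the toolbox. The following Python sketch is a rough stand-in for MATLAB's Wavelet Toolbox, using a hand-rolled complex Morlet wavelet; the signal here is a synthetic 10 Hz sine, not ECG data:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet stretched to `scale` samples."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def scalogram(signal, scales):
    """|CWT| magnitude image: one row per scale (~1/frequency), one column per time step."""
    n = len(signal)
    t = np.arange(n) - n // 2          # center the wavelet kernel
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        out[i] = np.abs(np.convolve(signal, morlet(t, s), mode="same"))
    return out

# Synthetic signal: a 10 Hz oscillation sampled at 100 Hz for 4 seconds.
fs = 100.0
time = np.arange(0, 4, 1 / fs)
sig = np.sin(2 * np.pi * 10.0 * time)
img = scalogram(sig, np.arange(1, 32))
print(img.shape)  # one row per scale, one column per sample
```

The brightest row of `img` sits at the scale matching the signal's frequency, which is exactly the kind of structure a scalogram makes visible to a convolutional network.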
Why transform a wave into a shaded image before feeding it to a neural network? “The idea is that a time-frequency view of a signal can make relevant characteristics more apparent than in the raw time-domain data,” says Wayne King, a principal software engineer at MathWorks. “Importantly, creating images allowed the researchers to take advantage of convolutional neural nets, which computer scientists have finely honed over the years.”
Arefnezhad fed these images, along with ground-truth drowsiness labels, into a neural network, which he constructed in MATLAB using Deep Learning Toolbox™. “It was very user-friendly,” Arefnezhad says. “I could add different types of layers and easily make my own neural net.” He trained it by having it classify ECG signals as alert, moderately drowsy, or extremely drowsy. The neural net adjusts itself based on whether it is right or wrong.
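The shape of that computation can be illustrated with a toy forward pass. This numpy sketch is not the authors' architecture; it uses random, untrained weights only to show how a convolutional layer turns a scalogram image into class probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernels):
    """Valid 2-D convolution of one image with a bank of filters."""
    kh, kw = kernels.shape[1:]
    h, w = image.shape
    out = np.empty((len(kernels), h - kh + 1, w - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(h - kh + 1):
            for j in range(w - kw + 1):
                out[k, i, j] = np.sum(image[i:i + kh, j:j + kw] * ker)
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(scalogram_img, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear -> softmax over 3 classes."""
    feat = np.maximum(conv2d(scalogram_img, kernels), 0.0)  # ReLU
    pooled = feat.mean(axis=(1, 2))                         # one value per filter
    return softmax(weights @ pooled)

# Toy 16x16 "scalogram", 4 random 3x3 filters, 3 drowsiness classes.
image = rng.standard_normal((16, 16))
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((3, 4))
probs = forward(image, kernels, weights)
print(probs)  # three probabilities summing to 1
```

Training adjusts `kernels` and `weights` so the highest probability lands on the label the psychologists assigned.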
Balancing Imbalanced Data
An important layer at the end of the network accounts for the fact that the data was imbalanced. In the manual driving tests, for example, only 6% of the samples came from extremely drowsy drivers. An algorithm could be right nearly all the time just by guessing one of the other two labels. So Arefnezhad added a layer that placed extra emphasis on extremely drowsy samples during training. Some other researchers, he says, instead feed their algorithms data split evenly between drowsy and not drowsy.
Neural networks consist of parameters that define the strengths of connections between virtual neurons; these parameters change during training. Training also depends on hyperparameters, values the researchers set beforehand to control things such as the learning rate (how much the parameters change in response to feedback) and how much noise to add during training (which affects the network’s robustness). Some people pick hyperparameters based on rules of thumb, and some try many combinations by brute-force search. Arefnezhad employed Bayesian optimization, which uses probability theory to narrow the search over time.
The team tested the network on images it had not seen and compared its performance to two other machine learning methods, both of which relied on features manually extracted from the ECG data. First, they collected all the intervals between R-peaks. Then, they calculated 11 values, such as the standard deviation within a set of intervals. They fed these values into one of two classifiers, a k-nearest neighbor (KNN) model and a random forest. The best of these baseline methods, the random forest, achieved 62% accuracy when classifying drowsiness in manual driving modes and 64% in automated modes.
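The first of those baselines is simple enough to sketch. This minimal numpy k-nearest-neighbor classifier votes among the closest training samples; the two features and their values here are hypothetical stand-ins for the 11 R-R statistics the team actually computed:

```python
import numpy as np

def knn_predict(train_x, train_y, query, k=3):
    """Majority vote among the k training points nearest to `query`."""
    dists = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(dists)[:k]]
    labels, counts = np.unique(nearest, return_counts=True)
    return labels[np.argmax(counts)]

# Hypothetical HRV-style features per sample: [mean R-R interval, SDNN].
train_x = np.array([[0.80, 0.02], [0.82, 0.03], [1.05, 0.09], [1.10, 0.11]])
train_y = np.array(["alert", "alert", "drowsy", "drowsy"])
print(knn_predict(train_x, train_y, np.array([1.00, 0.08])))  # → "drowsy"
```

Because methods like this see only the hand-picked summary statistics, they discard whatever other structure the raw ECG holds, which is the gap the deep learning approach exploits.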
The deep learning neural network outperformed these methods. It achieved 77% and 79% accuracy, respectively. Arefnezhad was surprised by its ability to find the right clues in the scalograms. Looking at the images, “you cannot see that much difference between alert and moderately drowsy drivers,” he says, “but the neural network easily identifies the difference.”
The Road Ahead
Eichberger and Arefnezhad see many paths forward for this research. A clear hindrance to practical application is that they collected ECG data using a chest electrode, which everyday drivers won’t be wearing. Other sensors, such as smartwatches, might take the chest electrode’s place. Researchers are also developing camera systems that can detect the pulse from fluctuations in skin color. “It’s not our intent to have a market-ready solution,” Eichberger says. “We intended to demonstrate that with a feasible technology, driver drowsiness classification is possible in a better way than we know at the moment.”
They also hope to combine their ECG data with other data to make the system more robust in case one signal fails. They would like to create personalized classifiers since signals for one person might mean something different from those for another. Fine-tuning a classifier could require drivers to spend some time in a simulator providing data.
Eichberger and Arefnezhad plan to move from a stationary simulator to a test track. That may help them address another problem: “At the moment, nobody knows how you should design the takeover procedure when the vehicle fails,” Eichberger says. “How should it tell the driver to take charge? How much time should it allow?”
Taking over will go much more smoothly if a car can keep drivers out of moderately drowsy states. “So, knowing when drivers are just moderately drowsy—perhaps even before they do,” Eichberger says, “is a huge improvement.”
This post was originally published on the MathWorks blog.