Not All ADAS Vehicles Created Equal
In the tests, performed both on the road and on test tracks, IIHS found that some models struggled “in typical driving situations, such as approaching stopped vehicles and negotiating hills and curves.”
IIHS is a Virginia-based, nonprofit organization funded by auto insurers.
The five Level 2 models that IIHS used for its testing were a 2017 BMW 5-series with “Driving Assistant Plus,” a 2017 Mercedes-Benz E-Class with “Drive Pilot,” a 2018 Tesla Model 3 and 2016 Model S with “Autopilot” (software versions 8.1 and 7.1, respectively), and a 2018 Volvo S90 with “Pilot Assist.”
IIHS’s ADAS tests have exposed wide variability in Level 2 vehicle performance across a host of scenarios. These systems can fail under any number of circumstances. In some cases, certain models equipped with ADAS are apparently blind to stopped vehicles and could even steer directly into a crash.
Indeed, the test results can be confusing.
When IIHS tested the systems at 31 mph with adaptive cruise control turned off but automatic braking on, both Teslas — the Model S and Model 3 — braked but still hit a stationary vehicle. According to IIHS, they were the only two models that failed to stop in time during the tests.
And yet, when the same test was repeated with ACC engaged, the BMW 5-series, Mercedes-Benz E-Class, and Tesla Model 3 and Model S braked earlier and more gently than with emergency braking alone and avoided the stationary vehicle.
IIHS acknowledged that it’s still “crafting a consumer ratings program for ADAS.” The institute noted, “IIHS can’t say yet which company has the safest implementation of Level 2 driver assistance.”
You can read the IIHS test results here.
Building blocks of L2 vehicles
Phil Magney, founder and principal at VSI Labs, explained that L2 systems are largely vision-first, often with the help of radar:
[Vision systems] maintain their lane keeping with their vision algorithms. If the lines become obscured in any way, the performance degrades. If the lines are gone, they simply will not work and cannot be engaged.
All these solutions are enabled with radar as well, which gives them their dynamic speed control when following other vehicles.
Most L2 solutions (including all tested) are further enabled with automated emergency braking (AEB) that is designed to mitigate collisions with stationary vehicles. This feature is typically enabled with radar and/or camera.
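The division of labor Magney describes — lane keeping gated on visible lane lines, ACC running off radar — can be sketched in a few lines of Python. All names and thresholds here are invented for illustration; no vendor exposes such an API.

```python
# Hypothetical sketch of the engagement logic described above. A Level 2
# system refuses to engage lane keeping when lane-line confidence is low,
# while radar-based ACC does not depend on lane lines at all.
# The confidence threshold (0.6) is an illustrative assumption.

from dataclasses import dataclass

@dataclass
class Perception:
    lane_line_confidence: float  # 0.0 (no lines visible) .. 1.0 (clear lines)
    radar_target_present: bool

def can_engage_lane_keeping(p: Perception, min_confidence: float = 0.6) -> bool:
    """Lane keeping needs visible lane lines; if they are obscured,
    performance degrades, and if they are gone, it cannot be engaged."""
    return p.lane_line_confidence >= min_confidence

def can_engage_acc(p: Perception) -> bool:
    """ACC works from radar alone (dynamic speed control behind a lead car),
    so in this sketch it never depends on lane lines."""
    return True

clear = Perception(lane_line_confidence=0.9, radar_target_present=True)
faded = Perception(lane_line_confidence=0.2, radar_target_present=True)
print(can_engage_lane_keeping(clear))  # True
print(can_engage_lane_keeping(faded))  # False
```

The point of the gate is exactly what Magney notes: the same car can drive a stretch of highway flawlessly and then drop lane keeping entirely where the paint fades.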
What causes variability?
Magney said, “A lot of performance variance is found on these systems because there are so many elements of the HW/SW configurations.”
For example, active lane keeping is a multi-step process that partitions lane detection from control systems, said Magney. “Each of these steps has a unique set of code with its own parameters. In tight turns, these solutions can fail depending on the look-ahead settings, which are necessary to calculate the curvature.”
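Magney's point about look-ahead settings on tight curves can be made concrete with a toy calculation. This is not any production stack's algorithm — it is a minimal sketch, assuming the common "look-ahead point" scheme in which curvature is inferred from the lateral offset y of a lane point at distance L via the small-angle arc model y ≈ κ·L²/2.

```python
# Illustrative, pure-Python sketch: infer lane curvature from the lateral
# offset of a look-ahead point on a circular lane arc. On a tight curve,
# a look-ahead distance comparable to the curve radius breaks the
# small-angle approximation and the curvature estimate degrades.

import math

def curvature_estimate(look_ahead, radius):
    """Curvature inferred from the look-ahead point via y ~ k*L^2/2."""
    y = radius - math.sqrt(radius**2 - look_ahead**2)  # true lateral offset
    return 2.0 * y / look_ahead**2

R = 50.0                              # a tight 50 m curve; true curvature = 1/R = 0.02
short = curvature_estimate(10.0, R)   # modest look-ahead: ~1% error
long_ = curvature_estimate(45.0, R)   # look-ahead near the radius: ~39% overshoot
print(short, long_)
```

With a 10 m look-ahead the estimate is within about 1% of the true curvature; stretch the look-ahead to 45 m on the same 50 m curve and the estimate overshoots by roughly 39% — one plausible way a controller tuned for gentle highway bends misjudges a tight turn.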
Magney added that the ACC pipeline is equally complex. “ACC regulates the longitudinal velocity of a vehicle based on the kinematics of the host vehicle and the target vehicle.”
The primary goal of ACC, as Magney sees it, “is to apply throttle and brakes in order to match the speed to that of the target vehicle. Both comfort and safety are key features for ACC, but in some cases, safety will take priority over comfort to avoid a collision.”
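The behavior Magney describes — match the target vehicle's speed, hold a gap, and let safety override comfort — maps onto a simple control law. The gains, limits, and the comfort/safety split below are illustrative assumptions, not any manufacturer's tuning.

```python
# A toy ACC control law in the spirit described above: regulate host speed
# toward the target vehicle's speed while keeping a time-gap-based distance.
# All gains and limits (k_speed, k_gap, 1.8 s time gap, decel bounds) are
# invented for illustration.

def acc_command(host_speed, target_speed, gap, time_gap=1.8,
                k_speed=0.4, k_gap=0.1,
                comfort_decel=-2.0, max_decel=-6.0, max_accel=1.5):
    """Return a longitudinal acceleration command in m/s^2."""
    desired_gap = time_gap * host_speed  # distance we want to keep (m)
    accel = k_speed * (target_speed - host_speed) + k_gap * (gap - desired_gap)
    if gap < 0.5 * desired_gap:
        # Safety takes priority over comfort: permit hard braking.
        return max(min(accel, 0.0), max_decel)
    # Comfort mode: clamp to gentle acceleration and deceleration.
    return max(min(accel, max_accel), comfort_decel)

# Closing fast on a slower lead car (30 m/s vs. 20 m/s, 40 m gap):
# the raw command (-5.4 m/s^2) is clamped to the comfort limit.
print(acc_command(host_speed=30.0, target_speed=20.0, gap=40.0))  # -2.0
```

Even this toy version shows why tuning matters: how early the controller brakes, and how hard, falls out of a handful of parameters that differ from one manufacturer to the next.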
Unique to radar-based calculations is all the filtering necessary to avoid false positives, he noted. “For example, if you are traveling at speed on an expressway and are overtaking a slower car in an adjacent lane, you must be certain that what you choose to brake for is within your trajectory!”
False positives related to ACC happen occasionally when a vehicle brakes despite nothing in its path. Magney observed, “Most Tesla owners have experienced this. It’s not so much dangerous as it is annoying.”
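The trajectory check Magney describes — be certain the object you brake for is actually in your path — amounts to a target-selection gate. The sketch below is a hedged illustration with invented names and thresholds, assuming a constant-curvature prediction of the host's path.

```python
# Hypothetical target-selection gate: before braking for a radar return,
# check that it lies inside the host's predicted path. The half-lane-width
# threshold and the constant-curvature path model are assumptions for
# illustration only.

import math

def in_host_path(target_range, target_azimuth_deg, path_curvature,
                 half_lane_width=1.8):
    """True if the target's lateral offset from the host's predicted
    (constant-curvature) path is within half a lane width."""
    x = target_range * math.cos(math.radians(target_azimuth_deg))  # ahead (m)
    y = target_range * math.sin(math.radians(target_azimuth_deg))  # lateral (m)
    path_y = 0.5 * path_curvature * x**2  # small-angle arc prediction
    return abs(y - path_y) <= half_lane_width

# Straight road: a slower car ~3.5 m to the left at 60 m (adjacent lane)
# should NOT trigger braking, while a car dead ahead should.
adjacent = in_host_path(60.0, math.degrees(math.atan2(3.5, 60.0)), 0.0)
ahead = in_host_path(60.0, 0.0, 0.0)
# Curving road (curvature 0.01 1/m): a return dead ahead at 60 m is off the
# predicted path, because the lane bends away from it.
curved = in_host_path(60.0, 0.0, 0.01)
print(adjacent, ahead, curved)  # False True False
```

The third case is the flip side of false positives: get the path prediction wrong and the filter that suppresses phantom braking can also discard a genuine stopped vehicle — one plausible reason stationary objects are hard for these systems.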
ACC Pipeline (Source: VSI Labs)
Other details also contribute to varied performance. Demler cited “difference in the sensors, where they are positioned,” and “differences in the steering control mechanisms,” among others. He added, “All the variances in design factors come into play between manufacturers as well as between models from the same manufacturer.”
Should we define ADAS performance standards?
A case in point comes from IIHS: “One of the questions researchers looked to answer is, do the systems handle driving tasks as humans would?” The report said, “Not always, tests showed.” It explained, “When they didn’t perform as expected, the outcomes ranged from the irksome, such as too-cautious braking, to the dangerous — for example, veering toward the shoulder if sensors couldn’t detect lane lines.”
EE Times asked if this would be the time to start defining acceptable ADAS performance standards — for safety reasons. Is anyone talking about it?
Demler said that he isn’t aware of anyone specifically discussing ADAS standards using a set of tests like those done by IIHS. But he agreed: “This is definitely an argument for doing it.”
He said, “I also expect that it would go into the New Car Assessment Program (NCAP) rating, but lane-keep assist and ACC aren’t mandated features. The car magazines and consumer reports do follow the same test procedures on all cars that they evaluate, so that information is available to car buyers.”
Magney concurred. “An argument for establishing performance standards can be made to level the expectations in terms of capabilities,” he said. “A protocol for doing this is pragmatic and a natural extension of existing safety agencies. A Level 2 automated system should perform well against its intended design domain. This would not include an all-out scenario test but, rather, a defined protocol that examines measurable performance against defined targets.”
Demler doesn’t believe that IIHS is equipped to design and implement rigorous tests; he regards the institute’s findings as “just subjective evaluations.” He said, “We need the National Highway Traffic Safety Administration (NHTSA) to implement standards, but the SAE and manufacturers should get together to drive that.”

Magney added, “In order to rate automated driving features, they would need to be tested against a devised set of scenarios on various types of road segments and in various conditions. Probably a pass/fail-type test based on multiple runs.”
Magney also believes that it takes “a more refined approach to attempt to rate these features.” He said, “We don’t know how the course was set, but I think that in some examples, the curve testing and hill testing were perhaps outside the normal operating domain.”