Last week, the National Highway Traffic Safety Administration, the agency whose mission is to "Save lives, prevent injuries, and reduce vehicle-related crashes," announced a recall of Tesla cars:
“On Wednesday the National Highway Traffic Safety Administration announced a massive recall of Teslas equipped with Full Self-Driving Beta, the technology that enables vehicles to control some aspects of driving, such as turning and adjusting speed, in urban environments. The FSD package, which currently costs Tesla owners an additional $15,000 when they buy their cars, requires the driver to be watching the road at all times (although Tesla enthusiasts have figured out ways to trick the cars’ attention guardrails for years). The NHTSA recall affects over 360,000 Teslas with FSD, which is pretty much all of them”
As usual, this raises many questions. In particular: what caused the recall, who decides to issue one, and what exactly gets recalled? So let's delve into it.
What was recalled?
The first question, of course, is: what was the main issue behind the recall? Over the years, recall notices have become the norm, but they were usually linked to braking systems or airbags. In other words, they were typically linked to a dysfunctional or dangerous physical component.
This time around, according to NHTSA, the recall is of the Full Self-Driving (FSD) system, which
“may allow the vehicle to act unsafe around intersections, such as driving straight through an intersection while in a turn-only lane.” That sounds bad, as do other FSD behaviors cited by the federal car-safety agency, including speeding, rolling through stop signs, and running yellow traffic lights “without due caution.”
It is important to note that this is a voluntary recall jointly agreed upon by Tesla and NHTSA. Tesla owners will be notified of the availability of the remedy by April 15 and will be able to download it.
There are many things to note here, but the first is that we will most likely see more recalls of this type over the next few years, and fewer of the component type. Over the last few years, cars have become more reliable and reasonably safe:
“Despite the increased adoption of complex vehicle technology, dependability continues to improve,” said Dave Sargent, vice president of global automotive at J.D. Power. “There’s no question that three-year-old vehicles today are better built and more dependable than same-age vehicles were in previous years.”
Don't get me wrong: cars are still not completely safe, given the speeds at which they travel. Still, this safety has less to do with mechanical and electrical components, and more to do with the software used to operate them (and, of course, the human element).
These software-related quality issues have many implications.
Cars always impose externalities on their environment, from pedestrians to other vehicles to just random 7-Elevens. So any quality issue impacts not only the driver but also their surroundings. This is true when the malfunction is electrical or mechanical, but even more so with software, which aims to interpret the environment and decide how the car should behave in it. In particular, the software determines whether to yield to a pedestrian crossing the road and what distance to maintain to the next car when the road is icy. Software-related issues therefore have the potential to impact more people than standard quality issues, which should make us even more sensitive to them.
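To make the icy-road example concrete, here is a minimal sketch of the kind of calculation such software has to get right. This is my own textbook-physics illustration (the `safe_following_distance` function and its constants are mine, not anything from an actual FSD stack): stopping distance grows as v²/(2μg), so the safe gap balloons when friction μ drops.

```python
# Illustrative only: a toy version of the following-distance logic an AV
# might need. The formula is textbook physics; nothing here comes from an
# actual FSD implementation.

G = 9.81  # gravitational acceleration, m/s^2

def safe_following_distance(speed_ms: float, friction_mu: float,
                            reaction_time_s: float = 1.5) -> float:
    """Distance (m) needed to stop: reaction distance plus braking distance.

    Braking distance = v^2 / (2 * mu * g). Friction mu is roughly 0.7 on
    dry asphalt and can drop to roughly 0.1 on ice.
    """
    reaction_distance = speed_ms * reaction_time_s
    braking_distance = speed_ms ** 2 / (2 * friction_mu * G)
    return reaction_distance + braking_distance

speed = 50 / 3.6  # 50 km/h expressed in m/s
print(f"dry road: {safe_following_distance(speed, 0.7):.0f} m")
print(f"icy road: {safe_following_distance(speed, 0.1):.0f} m")
```

At 50 km/h, the required gap grows from about 35 meters on dry asphalt to about 120 meters on ice. A bug in this one calculation affects everyone around the vehicle, not just the driver.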
In reality, we are not yet ready to address these issues properly. Many questions remain only partially answered, as this recall illustrates.
In particular,
How do you ensure that self-driving systems are safe?
Who is supposed to ensure they are safe?
What do you do when you realize they are not safe? And by "you," I mean the entity that is supposed to ensure safety, rather than the firm; the firm, clearly, should fix these issues.
Let’s take the questions one at a time.
How do you verify that these software systems are safe? How do you train an autonomous driving system to be safe when accidents are actually a fairly rare occurrence? Rarity means the autonomous system would have to drive billions of miles to encounter the edge cases that truly test how the vehicle reacts.
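To see why the numbers get so big, here is a back-of-the-envelope calculation using standard rare-event statistics (my own illustration; the target rate below is an assumption for the arithmetic, not a regulatory figure): with zero failures over m miles, you can claim a failure rate below r with confidence 1−α only once m ≥ ln(1/α)/r.

```python
import math

# Back-of-the-envelope rare-event statistics (standard Poisson reasoning;
# the target rate is an assumption for illustration, not a regulatory figure).
# With zero failures over m miles, the failure rate is below r with
# confidence 1 - alpha once m >= ln(1/alpha) / r.

def miles_needed(target_rate_per_mile: float, confidence: float) -> float:
    alpha = 1 - confidence
    return math.log(1 / alpha) / target_rate_per_mile

# Target: at most one failure per 100 million miles, at 95% confidence.
rate = 1 / 100_000_000
print(f"{miles_needed(rate, 0.95):,.0f} failure-free miles needed")  # ~300 million
```

And that only demonstrates the overall rate; it says nothing about finding the specific edge cases that trigger failures, which is where simulation comes in.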
Rahul Mangharam is one of the people who have been thinking about this question for quite some time. His team proposed a solution:
“In this work, we build a test harness for driving an arbitrary AV’s code in a simulated world. We demonstrate this harness by using the game Grand Theft Auto V (GTA) as world simulator for AV testing. Namely, our AV code, for both perception and control, interacts in real-time with the game engine to drive our AV in the GTA world, and we search for weather conditions and AV operating conditions that lead to dangerous situations. This goes beyond the current state-of-the-art where AVs are tested under ideal weather conditions, and lays the ground work for a more comprehensive testing effort.”
So, while we all imagine quality tests as crash test dummies, they actually involve playing games.
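To make this concrete, here is a minimal sketch of what such a scenario search could look like. Everything here is hypothetical: the `Simulator` class and its method names are stand-ins for whatever API a real harness (the GTA-based one included) would expose, and the paper's actual search is more sophisticated than pure random sampling.

```python
import random

# Hypothetical sketch of a scenario-search harness, in the spirit of the
# GTA-based work quoted above. The Simulator class and its method names
# are stand-ins, not the paper's actual API.

class Simulator:
    def run_episode(self, weather: str, time_of_day: int,
                    traffic_density: float) -> dict:
        """Drive the AV stack through one simulated episode and return
        outcome metrics (e.g., collisions, near misses)."""
        raise NotImplementedError  # would call into the real simulator

def search_for_failures(sim: Simulator, episodes: int = 1000) -> list:
    """Sample operating conditions and keep the ones that end badly.

    Real harnesses often use guided search (e.g., optimization over the
    condition space); random sampling keeps the sketch simple.
    """
    failures = []
    for _ in range(episodes):
        conditions = {
            "weather": random.choice(["clear", "rain", "fog", "snow"]),
            "time_of_day": random.randint(0, 23),  # hour of the day
            "traffic_density": random.uniform(0.0, 1.0),
        }
        outcome = sim.run_episode(**conditions)
        if outcome["collisions"] > 0 or outcome["near_misses"] > 0:
            failures.append((conditions, outcome))
    return failures
```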
This brings us to the next question - who is supposed to conduct these tests? Clearly, the car makers. But is anyone verifying that they are doing it? Not so much:
“the U.S. does use pre-approval to improve transportation safety—just not for cars. But if a plane manufacturer is designing a new piece of software or hardware, the company must work closely with the Federal Aviation Administration to get the go-ahead prior to deployment…. But for autos, the U.S. has basically said to carmakers, “You’re good. We trust you.” Manufacturers place a sticker on each new vehicle stating that it complies with the Federal Motor Vehicle Safety Standards, and they’re all set”
It seems the regulator is waiting for something to happen. And then what? They issue a recall, which brings me to the next question.
What does it mean to recall software?
Now that we have determined that there is a malfunction, either because the car is deemed to be making the wrong decisions, or because it is making the same bad decisions human drivers make (sliding into the junction without stopping at the stop sign, a behavior it learned by mimicking other drivers), the decision has been made to fix the system. But since this is a downloadable software fix, is it really a recall?
Some people don’t like to call this a recall. And by some people, I mean a person:
“Tesla CEO Elon Musk on Thursday criticized the National Highway Transportation Safety Administration (NHTSA) for labeling a recent Tesla software update as a recall. “The word ‘recall’ for an over-the-air software update is anachronistic and just flat wrong!” Musk said on Twitter.”
And he is not wrong. The self-driving software will continue to be updated, and most software products we use are continuously updated. Often we are not even aware of the update: Instructure, the firm that makes our Learning Management System, doesn't let us know when it fixes a bug or improves its student analytics system. At the same time, we can all agree that updates to Netflix's movie-recommendation software or to Twitter's ranking algorithm (which promotes some people above others) are not at the same level of criticality as car software, so referring to the Tesla FSD update as merely a software update is a little misleading.
But still, is recall the right term for it? I am not sure.
And that brings me to the point of what it means to announce a recall.
Recalls as a Game
When it comes to announcing a recall, there are two key questions: why announce it, and who should announce it? Should it be the regulator, or should the firm try to preempt the announcement and take responsibility earlier, showing that it cares about quality issues and aims to address them rather than being forced to?
One can imagine multiple audiences for such recalls:
The vehicle owners – alerting them that they may need to fix something.
The OEM – forcing them to fix something, which they may or may not have wanted to fix.
And finally, prospective buyers – alerting them to the potential negligence of a specific maker.
Rob Bray and Ahmet Colak studied this game between the OEMs and the regulator using a massive data set of recalls:
First, they found that firms and regulators respond to different issues when they recall cars:
“Particularly, we find that firms are more inclined to recall complex components in newer luxury models with fewer registrations, whereas the government is more inclined to recall simple components in older low-end models with larger registrations. Furthermore, we find firms more averse to complaints that involve acceleration issues, airbag deployments, and hydraulics; we find the government more averse to complaints that involve fires, melting parts, and lights.”
My interpretation: firms are more reactive to high-end customers (luxury models, acceleration, etc.), while the government cares about the rest (low-end models and fires). While I would like the firms to care about both, this is somewhat rational.
But the paper's most important and interesting result concerns whether firms try to preempt a government recall:
“We observe no evidence of regulatory deterrence in the context of SaferCar defects: we estimate that government recall threats did not meaningfully affect manufacturer recall probabilities. This result implies that the manufacturers responded to complaints directly due to increased perceived failure rates but not indirectly due to increased likelihood of government recalls.”
One way to explain this is with another result from the same paper: automakers do not incur additional penalties when the government preempts them. This is interesting, since it means automakers gain nothing from preempting the government and suffer no negative consequences from having the government announce the recall rather than doing it themselves. One can imagine different reasons for this. One is the disparity in the types of recalls: possibly, automakers make sure to recall the issues that affect the customers who care about recalls. But this is just a conjecture.
If we describe this as a game, then at least in a world where most recalls are associated with mechanical issues, government preemption is not much of a deterrent: it does not inflict a disproportionate cost on automakers that might stop them from continuing to operate with a known issue.
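Here is a toy expected-cost version of that game (my own construction with invented numbers, not the paper's econometric model). The firm recalls voluntarily only when waiting is more expensive in expectation; the regulator's threat matters only through the penalty for being preempted, which is exactly the term the paper finds to be negligible.

```python
# Toy expected-cost model of the recall game. All numbers are invented for
# illustration; the paper's result is econometric, not a simulation.

def firm_recalls(failure_prob: float, harm_cost: float, recall_cost: float,
                 gov_recall_prob: float, preemption_penalty: float) -> bool:
    """The firm recalls voluntarily iff recalling is cheaper in expectation."""
    cost_if_recall = recall_cost
    cost_if_wait = (failure_prob * harm_cost
                    + gov_recall_prob * (recall_cost + preemption_penalty))
    return cost_if_recall < cost_if_wait

# With no extra penalty for being preempted (the paper's finding), the
# regulator's threat barely changes the firm's calculus:
for penalty in (0, 50_000_000):
    decision = firm_recalls(failure_prob=0.01, harm_cost=1_000_000_000,
                            recall_cost=20_000_000, gov_recall_prob=0.3,
                            preemption_penalty=penalty)
    print(f"preemption penalty ${penalty:>11,}: voluntary recall? {decision}")
```

With a zero preemption penalty, the firm waits; only a meaningful penalty flips the decision. That is one way to read the "no regulatory deterrence" finding.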
But maybe we are entering new territory. Over the period the study covers, the automotive market and technology were fairly stable. Most brands were fairly safe, and most recalls (beyond a few) were not significant.
This is about to change. The technology currently at stake is definitely not (yet) safe. Furthermore, many new automakers have significant growth baked into their valuations. They are much more a part of the “go big or go home” mentality, which may not work for mission-critical systems like cars. The public's aversion to accident risk and the founders' appetite for it may not be aligned, which is precisely where the regulator needs to step in and act as a deterrent.
But this game is important for the technological development of self-driving cars. I have professed that I am very bullish on the autonomous-car industry. But for this technology to become ubiquitous, founders need to push the envelope while remaining as careful as possible. Founders, however, will always be more optimistic than they should be. And regulators should be more cautious than the founders.
So I do hope that consumers react to this recall (so that the notion of a recall has a deterrent effect), and I do hope it makes the regulator more actively involved in testing these systems as they are developed. I also hope that recalls continue to happen, since these systems are still in their infancy. And while it's OK if ChatGPT spews nonsense, we want our cars to operate at a far higher level of consistency and safety.
Interesting article!
I thought I'd offer some thoughts, at least from my experience.
This is actually a very well-thought-out and well-written article, so thank you for sharing. You've done an excellent job explaining what was recalled and how manufacturers typically use software to test their systems in a virtual world. As far as validation goes, it is very true that the government relies on the manufacturers to self-verify that they meet all the government requirements. Since there are so many requirements spread over so many products, self-governing and documenting is the only way to validate these systems with any sort of speed, especially as changes are needed mid-production.
As for recalling software: recalling software is nothing new. What has changed is that we can now do it over the air. Prior to this, the software would be fully validated before being released on the product. If there was an issue, the customer would go to the dealership and get the new software; it functioned like any other recall. Secondly, this is not the first software recall that Tesla has had, which is what I find especially concerning about over-the-air updates. While we can get some fun new features in the infotainment, these updates are changing safety-critical systems like brakes and steering. See the example below:
https://www.usatoday.com/story/money/cars/2022/11/12/tesla-recall-vehicles-steering-hazard/10680782002/
While the government does have requirements that it trusts the OEMs to validate for the sake of speed, these still take time to do. Software updates, to me at least, feel like they are moving faster than the time it takes to validate and document these tests, so I'm a bit skeptical about how thorough Tesla's documentation is for each over-the-air update. Thirdly, it's hard to draw the line on when a software update counts as a "recall." I would argue that anything updating the software of safety-critical systems should be subject to the term "recall." Is it affecting brakes or steering? Yes. Is it adding Netflix to your infotainment? No. Given this, I would think self-driving software should be treated no differently from the software that has traditionally controlled your steering, brakes, or engine/motor, as it touches every one of those systems. Therefore, it is deserving of the title "recall."
Your paper says that the government is more likely to be averse to recalls involving fires, melting parts, and lights. I've actually been a part of one of these. I like that you set up recalls as a recurring game, because during my experience, the firm I was working for at the time was deathly afraid of getting another fine from NHTSA and was willing to do whatever it took to fix the recall within the 60-day span NHTSA requires. Maybe it's anecdotal, but I have personally seen this recurring game work in the government's favor. In this case, the OEM self-reported the issue to NHTSA within 24 hours of finding out about it.
Lastly, when it comes to reliability, I would think that self-driving software, much like other software in the vehicle, would need to be reliable to the point that the company feels legally responsible if any issues occur. OEMs release both hardware and software knowing that there will be some failures, somewhere. It's a statistical inevitability. But this statistical inevitability should be quantified and used as a measure of risk, both for the company and its customers. Until the manufacturers think they can handle the financial and public-relations risk of a system failure, engineers should be testing and validating these systems, as they have for every other part of their vehicles in the past. And that is where I think Tesla has been ethically dubious: they are having customers pay to beta-test a system for Tesla, potentially putting their own safety at risk. They often do this while hiding behind the legal obligation for their customers to be ready to take over the vehicle at any time. This is a Level 2 system, after all. But clearly, humans by nature don't react instantaneously to every situation.
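As a rough illustration of what quantifying that inevitability could look like (the fleet size, mileage, and failure rate below are invented round numbers, not Tesla data): with a large enough fleet, even a one-in-a-billion-miles failure rate makes some failures each year a near certainty.

```python
import math

# Illustrative fleet-level risk arithmetic. Fleet size, mileage, and
# failure rate are invented round numbers, not Tesla data.

fleet_size = 400_000              # vehicles running the software
miles_per_year = 12_000           # average miles per vehicle per year
failure_rate = 1 / 1_000_000_000  # assumed failures per mile

total_miles = fleet_size * miles_per_year
expected_failures = total_miles * failure_rate
p_at_least_one = 1 - math.exp(-expected_failures)  # Poisson approximation

print(f"fleet miles per year:    {total_miles:,}")
print(f"expected failures:       {expected_failures:.1f}")
print(f"P(at least one failure): {p_at_least_one:.0%}")
```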
I am 100% on board with autonomous technology and its future potential, but I am hoping that this recall puts some caution into Tesla's FSD development team when it comes to pushing out updates. However, I feel it may require further recalls to let Tesla know that this is a repeated game with consequences.