A modest proposal for an alternative or adjunct to the SAE levels of automation
Writing a post like this has been on my to-do list for about five years, but always down at the bottom of the list. Recently, however, I’ve seen more disenchantment with the “levels” of automation, so it seems like a good moment to propose an alternative.
Back in 2013, the US National Highway Traffic Safety Administration (NHTSA) issued their “Preliminary Statement of Policy Concerning Automated Vehicles”, which included definitions for “levels” of automation, starting at Level 0 (no automation) and ending at Level 4 (the vehicle can drive itself with no need for any intervention from a human). In an effort to further clarify, the Society of Automotive Engineers (SAE) developed their own taxonomy of “levels” in 2014, starting at Level 0 (no automation) and ending at Level 5 (the vehicle can drive itself anywhere and in any situation a human can, with no need for any intervention from a human). (See this post for a quick overview.)
The SAE’s taxonomy has become dominant, but it has had its critics. Rather than reviewing the various critiques out there, I’ll briefly point out what I see as its two main shortcomings.
First, the SAE’s use of the word “levels” is confusing. The word connotes a step-wise progression from someplace lower to someplace higher. However, not all of the SAE levels fit neatly with that idea of progression.
A level 3 system, where the driver need not continually monitor the automated system but must stand by to take over control when the system requests, would clearly have to be smarter and more capable than a level 2 system, where the driver must continually monitor the system and be ready to take over with no notice whatsoever if they observe that it is failing, or is about to fail.
Things get muddy with level 4. A level 4 vehicle can drive without any human support or backup, as long as it is within its designated “operational design domain”—a range of situations and environments defined according to parameters such as road type, maximum speed, weather conditions, and time of day. That operational design domain could be quite restricted; for example, the automated system might be capable of driving the vehicle with no human intervention only when on low-speed roads that exclude pedestrians and cyclists, and in the absence of heavy rain, snow, or fog.
Compared to such a level 4 system, a level 3 system might be considerably more technologically advanced. That of course explains why level 4 systems have been around for years already while developers of self-driving technology are still working on level 3. The idea of a stepwise upward progression doesn’t hold here. Rather than “levels”, it would be more coherent for the SAE to talk about “categories” of automation.
The second main problem is that the SAE levels go too far in boiling down the complexity of automated systems.
Consider level 4 again. The most advanced level 4 vehicles today (probably Waymo’s) are certainly more advanced than the most advanced level 4 vehicles were a few years ago. But in both cases, we simply call them level 4. If we want to be more specific, we need to add some other qualifier; we might have level 4 with highly restricted operational design domain versus level 4 with moderately restricted operational design domain, for example.
That’s quite a straightforward solution, and it’s approximately what I’ll propose here as an alternative, or adjunct, to the SAE levels.
Categorizing automated vehicles in a two-dimensional framework
A caveat before continuing: what follows is merely a conceptual proposed framework. It would need to be fleshed out before it could be put to use in the real world.
We can get a useful description of an automated vehicle by answering two questions: what is the role of the human in the vehicle, and in what situations can the automated system operate (i.e., what is its operational design domain)? The SAE’s taxonomy of levels is a valiant attempt to compress those two dimensions into a one-dimensional scale. Unfortunately, the resultant scale isn’t really linear and it obscures critical information.
Instead of awkwardly trying to squeeze all the information into a one-dimensional scale, we can simply refer explicitly to the two dimensions, human role and operational design domain, to more precisely categorize automated vehicles.
Regarding the role of the human in the vehicle, let’s specify three divisions:
- the human must continually monitor the road and immediately take over control, with no warning from the automated system, when the system is driving unsafely or is at risk of doing so
- the human need not monitor the road, but must be available to take over control within a specified period of time when requested by the automated system
- the human is a passenger and will not be requested to intervene
(I’m ignoring automated systems where the human’s role is to do everything, or to steer, or to control the vehicle’s speed. I’ve also left out a description of a human role that’s included in the SAE’s taxonomy, which can be paraphrased as “the human is a passenger and may be requested to take over, but if they fail to do so, the automated system will be able to safely pull the vehicle over.”)
Regarding the operational design domain, a very pared-down set of divisions might be binary, for example:
- limited operational design domain (the automated system cannot operate everywhere and in every situation a human can)
- unlimited operational design domain (the automated system can operate everywhere and in every situation a human can)
Alternatively, we could construct a scale, numbered from 1 to 10, for example, where 1 means a very restricted operational design domain, 10 means unlimited, and 2 to 9 specify intermediate degrees.
Combining the two dimensions—the human role and the operational design domain—gives a table like this one:
In the table above, I’ve designated the human role as “monitor”, “standby”, or “passenger”; there may well be more evocative descriptions we could use.
Filling out the table gives categories of automation like “Monitor-3” and “Passenger-8”, where the number refers to a particular degree on the scale of operational design domains:
If we were to fill the column headings with descriptions of the role of the automated system (rather than the human’s role), possible descriptions could include “collateral driver” (or “partial”, “parallel”, “supplementary”, “subordinate”, “dependent, etc.), “temporary driver” (or “conditional”, etc.), and “chauffeur”. That would give category names like “Collateral-3” and “Chauffeur-8”:
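To make the combination of the two dimensions concrete, here is a minimal sketch in Python. It is purely illustrative: the role names and the ten-point operational design domain scale are the ones proposed above, but the class and its placements are my own invention, not any standard.

```python
from dataclasses import dataclass
from enum import Enum

class HumanRole(Enum):
    """The three divisions of the human's role proposed above."""
    MONITOR = "Monitor"      # must continually monitor and take over instantly
    STANDBY = "Standby"      # must take over within a specified time on request
    PASSENGER = "Passenger"  # will never be asked to intervene

@dataclass(frozen=True)
class AutomationCategory:
    """A point in the two-dimensional framework: human role x ODD degree."""
    role: HumanRole
    odd_degree: int  # 1 = very restricted ODD, 10 = unlimited

    def __post_init__(self):
        if not 1 <= self.odd_degree <= 10:
            raise ValueError("ODD degree must be on the 1-10 scale")

    @property
    def label(self) -> str:
        return f"{self.role.value}-{self.odd_degree}"

# Illustrative placements from the discussion (the numbers are guesses):
waymo = AutomationCategory(HumanRole.PASSENGER, 6)
autopilot = AutomationCategory(HumanRole.MONITOR, 4)
print(waymo.label)      # Passenger-6
print(autopilot.label)  # Monitor-4
```

Swapping the enum's strings for the system-role names (“Collateral”, “Temporary”, “Chauffeur”) would produce the alternative labels in the same way.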
The next table illustrates how this system might correspond to particular technologies—and I’ll emphasize that this is purely for the sake of illustration; the numbers in the operational design domain rows don’t refer to anything specific in the current discussion. Tesla’s “Autopilot” feature might be a Monitor-4, for example, and Audi’s promised SAE Level 3 technology might be a Standby-4. Vancouver’s SkyTrain is obviously not a road vehicle, but I’ve put it in the table to illustrate the idea of a very limited operational design domain. The SkyTrain is a driverless train running on dedicated tracks that are protected from intrusions by grade separation or by fences. Its automated driving system wouldn’t be smart enough to drive the train safely in more challenging environments; it might be a Passenger-1 system. A low-speed driverless shuttle like EasyMile’s EZ10 might be a Passenger-3; Waymo’s vehicles might currently be in the Passenger-6 category. The SkyTrain, the EZ10, and Waymo’s vehicles would all fall under the SAE designation of level 4. An SAE Level 5 vehicle would correspond to Passenger-10.
We can also use these categories to specify how an automated system should not be used. For example, if Tesla’s “Autopilot” is a Monitor-4, the human driver should not try to use it as a Monitor-5, nor as a Standby-4, and certainly not as a Passenger-10.
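That “should not be used as” relation can be made precise: an attempted use is within bounds only if it demands no more of the automated system than its category allows on either dimension. The following is a hypothetical sketch of that rule; the ranking of roles by how much independence they demand of the automation (monitor < standby < passenger) is my reading of the framework, not something defined anywhere.

```python
# Rank the human roles by how much independence they demand of the
# automated system: a "Monitor" system needs constant human oversight,
# while a "Passenger" system needs none.
ROLE_RANK = {"Monitor": 0, "Standby": 1, "Passenger": 2}

def use_is_within_bounds(system: tuple[str, int],
                         attempted_use: tuple[str, int]) -> bool:
    """True if an attempted use demands no more than the system's
    category on either dimension (human role and ODD degree)."""
    sys_role, sys_odd = system
    use_role, use_odd = attempted_use
    return ROLE_RANK[use_role] <= ROLE_RANK[sys_role] and use_odd <= sys_odd

autopilot = ("Monitor", 4)
print(use_is_within_bounds(autopilot, ("Monitor", 4)))     # True
print(use_is_within_bounds(autopilot, ("Monitor", 5)))     # False: ODD too broad
print(use_is_within_bounds(autopilot, ("Standby", 4)))     # False: role too passive
print(use_is_within_bounds(autopilot, ("Passenger", 10)))  # False on both counts
```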
One last point to wrap up: when an automated vehicle is still in the testing stages, the most relevant description of the technology specifies how it’s actually operating during testing, rather than how it’s intended to be used in the future when testing is complete. For example, say an automated vehicle will be a Chauffeur-5 system once testing is complete. During the development and testing process, however, a human safety driver (or drivers) must be on hand to detect and correct any failures in the system. So, during testing, the most accurate description of the vehicle would be Monitor-5.