Self-Driving Cars will Complicate before they Simplify

Timothy Blumberg
9 min read · Feb 6, 2017


I want to own a driverless car because it will make my life easier. I will be able to spend my traveling time enjoying media, working, reading, writing, engaging with friends, sleeping, and a whole host of other things: activities that many Tesla drivers are apparently engaging in already, regardless of their car’s actual ability to handle dangerous situations.

I will admit that Tesla’s Autopilot is a truly amazing feat of engineering. It is capable of reading and responding to traffic lights and speed limit signs, maintaining proper spacing with the car in front, managing in-lane steering, and making navigational turns, among a whole host of other features.

However, because this system has been packaged as a seemingly all-inclusive “solution” to the monotony of driving, many members of the public assume the system will never require human intervention, which is absolutely not the case. The name is such a misnomer, in fact, that Germany has ordered Tesla to stop calling its technology “Autopilot”.

Many readers may be familiar with the “Levels of Driving Automation” defined in SAE International’s J3016. This document explicitly defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation), and Tesla’s Autopilot is very clearly a Level 2 system.

Not all Automation Levels are Safe for Use

At CES 2017, Gill Pratt gave the world a much-needed reality check on the current state of car automation technology. Pratt helped launch the Toyota Research Institute (TRI), a cutting-edge hub for automation research in which Toyota pledged to invest $1 billion over the next five years. He elucidated the many reasons why driverless car tech is still many years away from completion, and discussed the many ideological pitfalls the public has been falling into with regard to vehicular automation.

I have created a graphic representing the relative danger associated with each level of vehicle automation, because that danger can vary wildly in unexpected situations depending on the level of automation. This variance is due to the drifting attention spans of human operators. If a driver is only expected to pay attention in emergency situations the car doesn’t know how to handle, then the driver will not be engaged enough in the act of driving to take control at the moment of the incident. Driver attentiveness drops off remarkably sharply as the time between required interventions increases.

Note that Levels 2 and 3 can be more dangerous than regular driving in some scenarios
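To make that attentiveness claim concrete, here is a purely illustrative toy model, not data from any study: suppose a driver’s readiness to take over decays exponentially with the time since the system last asked them to intervene (the five-minute half-life is my own assumption).

```python
# Illustrative only: assumed exponential decay of driver readiness.
def readiness(minutes_since_last_intervention: float, half_life_min: float = 5.0) -> float:
    """Fraction of 'fully engaged' attentiveness remaining (0.0 to 1.0)."""
    return 0.5 ** (minutes_since_last_intervention / half_life_min)

for m in (0, 5, 15, 60):
    print(f"{m:3d} min without an intervention -> readiness ~{readiness(m):.2f}")
```

Under this toy model, an operator who has not been asked to do anything for an hour is, for all practical purposes, a passenger.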

Level 1

Canonical examples of Level 1 tech are braking assistance and lane guidance systems. Level 1 tech is defined by reducing the driver’s cognitive load while still requiring their active operation of the vehicle.

Level 2: The Most Dangerous Level of Automation

Partial Automation systems (Level 2) are slightly more dangerous because most pieces of the driving task, in most driving environments, are handled by the system itself. “Driving environments” encompass weather, traffic, city versus highway, and so on, and most trips will involve several such environments.

“Partial Automation” implies that the driver is constantly monitoring the operation of the vehicle. Such a system wrongly assumes that, at any moment, the operator is engaged enough in the act of driving to seamlessly assume full control of the vehicle.

Level 3: Drivers Don’t Need to be Engaged

Level 3 automation systems do not require constant monitoring from their human operators and will transition control to the driver only after properly alerting them of the need to take control. Not surprisingly, the longer the driver is disengaged from the task of driving, the longer it takes them to get back up to speed (no pun intended) on the events surrounding the vehicle.

This level of automation represents a system that, while technically more difficult to build, is similarly dangerous to Level 2 automation. In order to properly transition control of the car to the driver in unexpected conditions (such as a car wreck ahead), the driver needs to be adequately warned at least 10 seconds before they must regain control.
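To put that 10-second window in perspective, here is a minimal back-of-the-envelope sketch; the speeds chosen are illustrative assumptions, not figures from the article.

```python
# How far ahead a Level 3 system must perceive a problem to give the driver
# a 10-second warning before they need to take over.
def warning_distance_m(speed_kph: float, lead_time_s: float = 10.0) -> float:
    """Distance the car travels during the warning window, in metres."""
    return speed_kph / 3.6 * lead_time_s

for speed_kph in (50, 100, 130):  # city street, highway, autobahn (assumed speeds)
    print(f"{speed_kph:3d} km/h -> hazards must be detected ~{warning_distance_m(speed_kph):.0f} m ahead")
```

At highway speeds, a 10-second heads-up implies reliably perceiving trouble roughly 300 metres ahead, which is a demanding requirement for any perception system.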

In many scenarios, even if you were reading a book or texting your friends, it would be reasonable to assume that you could safely take control of the vehicle within that time, especially if the potential issue can be mapped and the car forewarned several minutes before it encounters the issue. Deteriorating weather conditions and areas with many jaywalking pedestrians are two simple scenarios that come to mind.

However, there are many more nuanced and sudden scenarios that cannot be safely predicted and that automated systems may be unable to cope with: debris on the interstate, a car going the wrong way, small animals in the roadway, sudden and sporadic illumination from lightning, and so on. While many of these feel contrived and rare, the average driver encounters them with enough frequency to appreciate the vast number of diverse situations that can arise on the roadway.

Furthermore, the automated system must not only detect these numerous situations, but detect them with enough spare time to give the driver time to regain control. This task is difficult to reason about and theorized to be nearly impossible to accomplish with acceptable safety margins.

Level 4: Our aim for 5–10 years from now

In a stipulated set of driving scenarios (certain roads, times of day, traffic conditions, weather conditions, etc.), the car can operate fully autonomously without ever requiring human intervention. It is expected that we will see a good number of these vehicles on the road by 2025.

Level 5: The Ideal Automation System

I do not doubt that I will see a Level 5 automated vehicle on the road within my lifetime, but they will appear only after many millions of Level 4 cars have been on the roads for a couple of decades. The software systems required to back Level 5 vehicles are more complex than anything human beings have ever built before. They will require incredible innovations in computer vision, machine learning, automated behavior planning, and sensing technology. Such systems will demand unprecedented reliability from the complex computer systems that power their situational awareness and decision-making tiers.

Acceptable Level of Safety

Engineers of automated systems will be required to determine at what point a particular automated driving system has reached Level 3, Level 4, or even Level 5 autonomy. These determinations will require the system’s performance in unforeseen circumstances to be distilled into a score that, hopefully, properly represents how the car behaves. These systems will be required to respond to and handle many diverse problems in a social environment, while toeing the line between convenience and safety.
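As a purely hypothetical illustration of what distilling performance into a score could look like, the scenario names, weights, and pass rates below are my own invented numbers, not a proposed standard.

```python
# Hypothetical: fold performance on diverse test scenarios into one weighted score.
scenario_results = {
    # scenario: (weight reflecting rarity/severity, pass rate in testing)
    "highway debris":        (0.30, 0.97),
    "wrong-way driver":      (0.25, 0.92),
    "jaywalking pedestrian": (0.25, 0.99),
    "heavy fog":             (0.20, 0.95),
}

total_weight = sum(weight for weight, _ in scenario_results.values())
score = sum(weight * pass_rate for weight, pass_rate in scenario_results.values()) / total_weight
print(f"weighted safety score: {score:.3f}")
```

The hard part, of course, is not the arithmetic but choosing scenarios and weights that actually capture the unforeseen circumstances described above.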

It is most likely the case that we will not accept merely “human-level” performance from our automated vehicles. I fully expect political tension and public outcry to demand superhuman levels of safety.
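Some rough arithmetic shows what that bar looks like. The US fatality figure is the one cited later in this article; the mileage total and the “superhuman” multipliers are approximate assumptions used only for illustration.

```python
# Approximate figures for illustration only.
annual_fatalities = 30_000        # order-of-magnitude US figure cited in this article
annual_vehicle_miles = 3.0e12     # roughly 3 trillion miles driven per year in the US (approx.)

human_rate = annual_fatalities / annual_vehicle_miles  # fatalities per mile
print(f"human baseline: ~{human_rate * 1e8:.1f} fatality per 100 million miles")

for multiplier in (2, 10):        # hypothetical "superhuman" safety targets
    print(f"{multiplier}x safer: ~{human_rate / multiplier * 1e8:.2f} per 100 million miles")
```

Demonstrating a rate that low with statistical confidence would itself require an enormous number of autonomously driven miles.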

What We Should Expect in the Early Days

Because of the many issues I have presented with each of the several levels of automation tech, we should expect extreme timidity in the first several generations of self-driving vehicles. Cars will be hard-wired to drive more cautiously (read: frustratingly) than your grandmother.

A plastic bag fluttering across the highway will trigger hard braking. I predict that pedestrians will aggressively jaywalk in front of driverless vehicles, especially if a self-driving vehicle can be easily discerned from a human-operated one.

The age-old question of “Standard or Automatic?” will stop referring to the transmission and instead ask whether one’s car is capable of automatic operation or not.

Political Pushback

While over 30,000 fatalities occur in the US each year due to car wrecks, fatalities caused by cars operating in automatic mode will spawn media firestorms and protests. Technophobia fueled by the capricious release of underdeveloped safety systems has the potential to seriously delay the integration of automatic vehicles into our roadways.

Buggy firmware updates will cause overnight spikes in driver and pedestrian fatalities. Lawsuits will be filed against the software engineers and engineering managers who approved the roll-out of those updates.

While I am optimistic that these things will not stop the widespread adoption of such ground-breaking technology, I do believe that these events will happen. If our entrepreneurs and our technologists are not prepared for these realities or if public opinion can be swayed against this technology, then we will have a long and slow period of recovery.

Egotistical Driver Action

Where parking is hard to find, automatic cars will be sent to circle the block while the owner runs into the store to grab something, increasing roadway congestion with entirely empty vehicles. There will be incredible legal tension whenever such a car is involved in a fatal collision with a pedestrian.

Decrease in Car Wrecks and Vehicle-Related Fatalities

This one should be obvious simply from the positive results that Tesla has already been reporting. The faster reaction time of even imperfect automation systems will save many lives. Inevitably, scenarios will arise in which a human operator would almost certainly have been able to avoid a collision or fatality, but I predict that, on the whole, collision rates will drop considerably as driverless cars hit the road in droves.

Even if an automatic driving system is unable to avoid a collision completely, its reaction time does not deteriorate the way a human’s does as they grow acclimatized to the drive. Thus, I hypothesize that many fatal accidents will be converted into accidents with minor injuries to one or both parties through system intervention.
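A small sketch of the physics makes the point; the speeds, braking deceleration, and sight distance below are illustrative assumptions rather than measured values.

```python
# Illustrative: how much a faster reaction time reduces the speed at which
# a car reaches an obstacle a fixed distance ahead.
def impact_speed(v0_mps: float, reaction_s: float, decel_mps2: float, obstacle_m: float) -> float:
    """Speed (m/s) when reaching the obstacle, or 0.0 if the car stops in time."""
    braking_distance = obstacle_m - v0_mps * reaction_s  # distance left once braking starts
    if braking_distance <= 0:
        return v0_mps                                    # obstacle reached before braking begins
    remaining = v0_mps ** 2 - 2 * decel_mps2 * braking_distance
    return max(remaining, 0.0) ** 0.5

v0, decel, obstacle = 20.0, 8.0, 45.0   # ~72 km/h, hard braking, 45 m of visibility (assumed)
print(f"human  (1.5 s reaction): hits at {impact_speed(v0, 1.5, decel, obstacle):.1f} m/s")
print(f"system (0.2 s reaction): hits at {impact_speed(v0, 0.2, decel, obstacle):.1f} m/s")
```

With these assumed numbers, shaving the reaction time turns an impact at roughly 45 km/h into a complete stop, which is exactly the kind of conversion described above.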

Even in cases where the system prevented a fatality, I foresee these accidents being viewed as accidents caused by the self-driving car rather than as fatalities prevented by it. This is a dangerous result of improper framing.

Eventual Long-term Effects

After several generations of self-driving cars, the rough corners will be rounded off and the lives of many will be improved (and prolonged) by self-driving vehicles. They will become more assertive, able to correctly toe the line of aggressive driving that leads to increased roadway throughput and safety.

Improved Accident Avoidance Systems

Accidents that occur when drivers fall asleep at the wheel can be pre-emptively detected and avoided by monitoring conditions inside the cabin.
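One common approach to that kind of in-cabin monitoring is tracking how often the driver’s eyes are closed over a rolling window (a PERCLOS-style measure). The window length and threshold below are illustrative assumptions, not values from any particular product.

```python
from collections import deque

# Sketch of a PERCLOS-style drowsiness check over a rolling window of camera frames.
class DrowsinessMonitor:
    def __init__(self, window_frames: int = 900, threshold: float = 0.15):
        self.frames = deque(maxlen=window_frames)  # e.g. 30 seconds at 30 fps (assumed)
        self.threshold = threshold                 # max tolerated fraction of eyes-closed frames

    def update(self, eyes_closed: bool) -> bool:
        """Record one frame; return True when the driver looks drowsy."""
        self.frames.append(1 if eyes_closed else 0)
        perclos = sum(self.frames) / len(self.frames)
        return perclos > self.threshold            # True -> warn the driver or slow the car
```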

When developed to maturity, radar systems should make many accidents induced by heavy fog a thing of the past.

Additionally, I imagine that emergency actions taken in adverse weather conditions such as snow or rain could quickly surpass those of the average driver in efficacy. Correctly identifying the weather conditions and selecting the correct course of action will take many years of research, but when developed to maturity, such systems will easily out-perform humans.

The Simplifications are Worth it

The obvious obstacles to roadways dominated by automatic vehicles are mostly technical ones, but I’m honestly much more concerned with the consequences of a poorly informed public emotionally reacting to the development of an earth-changing new technology.

The problem of introducing driverless cars to society will involve a wide variety of growing pains, as we all seek to understand the implications of a new transportation paradigm. We need to spend an adequate amount of energy properly engineering the release of the technology to prevent the plethora of negative reactions from affecting society.

The motivation for this article is to inform members of the public about the challenges that still need to be solved and the potential consequences of buying products that promise more than they can deliver. If we can keep the unwashed masses skeptical enough of the benefits of a “Level 2.5” car, then I think there is less chance of a poorly timed lawsuit stifling further technological innovation.

I want to encourage us all to have more conversations about what the future will look like when these vehicles begin to hit the road. Humans will inevitably adapt, but we want to make sure that the natural adaptations will not lead to an increase in recklessness or non-altruistic behavior in the short-term before we can reap the long-term simplifications to our economic system.

Automatic Cars are Definitely the Future

We have many glorious moments of technological development awaiting us. We must ensure that we properly validate these systems before investors push immature products to market, as such products could stifle future innovation.

We have great things to look forward to. Let’s ensure that we plot a thoughtful way around the many obstacles that still lie in our path.
