Monday, October 14, 2019

Tesla: Its Strategy for Teaching Its Cars How to Drive Themselves


The vehicles in which we travel have always had known failure modes.  Whether cars or airplanes, there was never a guarantee that they were completely safe to ride in.  Manufacturers have always had to weigh the cost and practicality of correcting a failure mode against the probability that such a failure would lead to loss of life.  Early on, when such probabilities were larger, producers felt the heat when they made a poor choice.  As technology has evolved, the probabilities of mechanical failure have become much smaller, to the point that they are now a nearly negligible safety concern compared with human or software errors.

The next great improvement in transportation safety will involve removing error-prone humans from the system.  Aircraft, which operate in a much simpler environment, are well on their way to fully autonomous flight.  Automobiles, on the other hand, face a near-endless variety of situations that must be dealt with.  Humans can, in principle, handle all the situations that arise, but they are unreliable creatures who often fail to pay attention to their driving.  For both safety and marketing reasons, numerous efforts are underway to create an autonomous driving capability for automobiles.

Manufacturers and regulators must again face the question of “how safe is safe enough?” as these technologies become available to the public.  Zachary R. Mider provided an interesting look into the audacious strategy being employed by Tesla in its development of an autonomous driving capability.  His article appeared in Bloomberg Businessweek with a catchy title: Tesla’s Autopilot Could Save the Lives of Millions, But It Will Kill Some People First.

The data tells us that almost 40,000 people are killed each year in auto accidents in the US.  According to Mider, 94% of these are caused by driver error.  There have been a few deaths associated with the testing of autonomous vehicles.  The issue for developers and regulators is deciding when such technologies are valuable enough to be released to the driving public.  Most organizations seem to take a somewhat timid approach: retreating when an accident occurs and waiting until all the bugs are out of their systems.  That is prudent, but since human drivers are killing people at a hefty rate, an autonomous system that is imperfect yet better than humans already presents an opportunity to save lives.  And there are a lot of lives at stake, to say nothing of the millions of injuries caused in the millions of accidents that occur every year.

One car manufacturer has decided to be aggressive in developing and deploying autonomous capability as it becomes available.  Tesla provides a feature called Autopilot that assumes some of the responsibility for controlling the car, such as keeping the vehicle between lane markings and avoiding collisions.  It is far from a perfect system, and it is not a complete one.  Drivers are told to remain in control of the vehicle by keeping their hands on the steering wheel.  Notoriously, some Tesla drivers use the technology while pursuing other activities, such as taking a nap, leaving Autopilot to do whatever it can.  At least two deaths have been associated with the use of Autopilot.  Tesla’s leader, Elon Musk, does not view a few deaths as a sign that Autopilot is too dangerous to use.  Given the behavior of drivers, an argument can be made that even relatively simple safety mechanisms will be lifesaving features.

Musk takes the view that the best way—and perhaps the only way—to develop an effective system is by using it and learning from the experience gained.  Mider provides context using results from studies by the RAND Corporation.

“In a 2017 study for RAND Corp., researchers Nidhi Kalra and David Groves assessed 500 different what-if scenarios for the development of the technology. In most, the cost of waiting for almost-perfect driverless cars, compared with accepting ones that are only slightly safer than humans, was measured in tens of thousands of lives. ‘People who are waiting for this to be nearly perfect should appreciate that that’s not without costs,’ says Kalra, a robotics expert who’s testified before Congress on driverless-car policy.”

“Key to her argument is an insight about how cars learn. We’re accustomed to thinking of code as a series of instructions written by a human programmer. That’s how most computers work, but not the ones that Tesla and other driverless-car developers are using. Recognizing a bicycle and then anticipating which way it’s going to go is just too complicated to boil down to a series of instructions. Instead, programmers use machine learning to train their software. They might show it thousands of photographs of different bikes, from various angles and in many contexts. They might also show it some motorcycles or unicycles, so it learns the difference. Over time, the machine works out its own rules for interpreting what it sees.”
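The passage describes supervised learning: showing a model labeled examples and letting it work out its own rules.  A toy sketch of that idea follows; the random tensors stand in for photographs, and the tiny network, names, and shapes are my own inventions for illustration, not anything resembling Tesla’s actual software.

# Toy illustration of learning rules from labeled examples instead of hand-coding them.
# The "photos" here are random noise; every name and shape is invented for this sketch.
import torch
from torch import nn

images = torch.randn(64, 3, 64, 64)        # 64 pretend photos
labels = torch.randint(0, 2, (64,))        # 0 = bicycle, 1 = motorcycle

model = nn.Sequential(                      # a very small convolutional classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # the model adjusts its own internal rules to fit the examples
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")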

This type of learning requires the system under development to accumulate a great deal of experience, in the form of data, to learn from.  Mider tells us that while fatal accidents may seem plentiful, they occur only about once for every 86 million miles driven.  Consequently, any attempt to assess an autonomous system relative to the average human driver would require an enormous amount of testing.
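As a back-of-envelope check on that figure (my own arithmetic, not from the article): Americans drive roughly 3.2 trillion vehicle-miles per year, and with something under 40,000 road deaths annually, that works out to about one death per 86 million miles.

annual_miles = 3.2e12     # rough US vehicle-miles traveled per year (an approximation, not from the article)
annual_deaths = 37_000    # roughly the "almost 40,000" deaths cited above
print(f"{annual_miles / annual_deaths / 1e6:.0f} million miles per death")   # ~86 million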

“In another Rand paper, Kalra estimates an autonomous car would have to travel 275 million failure-free miles to prove itself no more deadly than a human driver, a distance that would take 100 test cars more than 12 years of nonstop driving to cover.”
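That time estimate is easy to verify with simple arithmetic; the 25 mph average speed below is my own placeholder for “nonstop driving,” not a figure from the paper.

required_miles = 275e6            # failure-free miles from the Rand estimate quoted above
fleet_size = 100                  # test cars
avg_speed_mph = 25.0              # assumed average speed for nonstop driving (a placeholder)
years = required_miles / (fleet_size * avg_speed_mph * 24 * 365)
print(f"{years:.1f} years of nonstop driving")    # about 12.6 years, i.e. "more than 12 years"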

Musk has hundreds of thousands of cars on the road producing data that can be used to upgrade the Autopilot system.  Lax restrictions on how drivers use Autopilot allow them to place their vehicles in situations that the system cannot yet handle.  While this can be dangerous, it also provides a “learning experience” for the software.  Current users regularly receive software upgrades as development proceeds.

“…Musk’s plan to simultaneously refine and test his rough draft, using regular customers on real roads as volunteer test pilots, doesn’t sound so crazy. In fact, there may be no way to achieve the safety gains of autonomy without exposing large numbers of motorists to the risk of death by robot. His decision to allow Autopilot to speed and to let it work on unapproved roads has a kind of logic, too. Every time a driver wrests control from the computer to avoid an accident, it’s a potential teachable moment—a chance for the software to learn what not to do. It’s a calculated risk, and it’s one that federal regulators, used to monitoring for mechanical defects, may be ill-prepared to assess.”
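The “teachable moment” idea, in which a driver’s takeover becomes a labeled example of what the software should not have done, can be pictured with a small data-structure sketch.  Everything below is hypothetical; the event fields and names are my own invention, not Tesla’s telemetry format.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DisengagementEvent:
    """One teachable moment: the driver overrode the computer."""
    timestamp: float          # when the takeover happened
    sensor_snapshot: list     # recent camera/radar frames leading up to it
    autopilot_action: str     # what the software was about to do
    driver_action: str        # what the human actually did

@dataclass
class TrainingBuffer:
    """Events queued for upload so the fleet's mistakes become training data."""
    events: List[DisengagementEvent] = field(default_factory=list)

    def record(self, event: DisengagementEvent) -> None:
        self.events.append(event)

# Example: the driver braked when the software would have kept going.
buffer = TrainingBuffer()
buffer.record(DisengagementEvent(
    timestamp=1571059200.0,
    sensor_snapshot=[],
    autopilot_action="maintain speed",
    driver_action="brake hard",
))
print(len(buffer.events), "event(s) queued for upload")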

Musk thinks Tesla is on the right path and is making steady progress.  Before an audience of investors and analysts, Musk had some rather astounding predictions to make.

“Over the course of the 2 1/2-hour presentation, Musk pointed investors toward a new focus: building the first truly driverless car. Cars on the road today, he said, would be able to use Autopilot on local roads within months. By sometime in 2020 they’d no longer need human oversight and could begin earning money as drone taxis in their downtime.”

“’It’s financially insane to buy anything other than a Tesla,’ Musk said, throwing up his hands. ‘It will be like owning a horse in three years’.”

Musk’s enthusiasm has been known to get him into trouble at times.  Investors have been able to make money betting against him in the short term, but he usually delivers eventually.  And his notion of the long term seems to be much shorter than anyone else’s.


