As we have stated in other posts, we are concerned that “self-driving” (also known as “autonomous vehicle”) technology is not as sophisticated, or as safe, as some manufacturers would have you believe. Specifically, we believe that problems with object detection and object recognition are the “Achilles’ heel” of autonomous vehicle technology.
In today’s post, the automobile accident injury lawyer at The Doan Law Firm will review a recent, but little-known, study suggesting that “autonomous” or “self-driving” vehicle technology still contains “bugs” that the automotive industry simply fails to mention.
A “driverless” or, more accurately, an autonomous vehicle is a vehicle that is capable of safely operating in all traffic conditions based solely on input received from its various sensors and its preprogrammed computer, without relying on the actions of a human driver. Although current autonomous vehicle technology is sophisticated, it still lacks the ability of the human brain in three critical areas:
1. Detecting that an object is present in the first place
2. Recognizing what type of object has been detected
3. Deciding what action to take based on that identification
We will take a look at these “must do’s” in the following sections.
Everyone is familiar with the phenomenon of perspective: large objects that are far away appear much smaller than they actually are. Although our brains “learned” to compensate for this optical illusion while we were very young, most computers do not grasp this concept and will “estimate” the distance to an object based on its apparent size. For autonomous driving software, this could lead to problems.
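For readers who are curious about the underlying math, here is a simplified sketch, in Python, of the “pinhole camera” rule a vision system might use to judge distance from apparent size. The focal length and object widths below are made-up numbers chosen purely for illustration, not the specifications of any real system.

```python
# A simplified sketch of the "pinhole camera" rule a vision system might
# use to judge distance from apparent size. The focal length and object
# widths are illustrative values, not real sensor specifications.

def estimate_distance_ft(real_width_ft, focal_length_px, apparent_width_px):
    """Distance = (real width x focal length) / apparent width in the image."""
    return real_width_ft * focal_length_px / apparent_width_px

# A 6-foot-wide car that appears 60 pixels wide (1,000-pixel focal length):
print(estimate_distance_ft(6.0, 1000.0, 60.0))  # 100.0 -> about 100 ft away

# The same car appearing half as wide reads as twice as far:
print(estimate_distance_ft(6.0, 1000.0, 30.0))  # 200.0 -> about 200 ft away

# The catch: if the software assumes the wrong real-world size -- say, it
# treats a 3-foot-wide object as a 6-foot-wide car -- its distance
# estimate is wrong by the same factor:
print(estimate_distance_ft(3.0, 1000.0, 30.0))  # actually ~100 ft, not 200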
Most autonomous vehicles compensate for this problem by restricting the computer’s “vision” to the area covered by its sensors, such as its LIDAR (Light Detection and Ranging), ultrasound, and infrared sensors. Since this coverage is usually cone-shaped and extends about 250 feet ahead of the vehicle, as far as the computer is concerned, anything outside that cone simply doesn’t exist and therefore will not be detected.
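To make the “cone of coverage” idea concrete, here is a minimal sketch of how such a range check might work. The 250-foot range comes from the paragraph above; the 60-degree field of view is an assumed number used only for illustration.

```python
import math

# A minimal sketch of a cone-shaped sensor coverage check. The 250 ft
# range is taken from the discussion above; the 60-degree field of view
# (30 degrees to either side) is an assumed, illustrative value.

SENSOR_RANGE_FT = 250.0
HALF_FOV_DEG = 30.0

def is_detectable(x_ft, y_ft):
    """x_ft is distance straight ahead of the vehicle; y_ft is lateral offset."""
    distance = math.hypot(x_ft, y_ft)
    if distance > SENSOR_RANGE_FT:
        return False  # beyond sensor range: the object "doesn't exist"
    bearing_deg = math.degrees(math.atan2(abs(y_ft), x_ft))
    return bearing_deg <= HALF_FOV_DEG  # outside the cone: also invisible

print(is_detectable(200.0, 20.0))  # True: well inside the cone
print(is_detectable(300.0, 0.0))   # False: dead ahead, but out of range
print(is_detectable(50.0, 60.0))   # False: close by, but too far to the side
```

The third example is the troubling one: an object can be quite close to the vehicle and still be invisible to the computer simply because it sits outside the cone.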
In the next section we will learn about the problems that can arise if the computer “sees” something but doesn’t “recognize” what it sees.
For the computer at the heart of an autonomous vehicle to “do its job,” it must know its position relative to other objects. It must also know what type of object it has detected and, based on information stored in its artificial intelligence (AI) database, decide what action to take. If an object is not detected, or is detected but not correctly identified, the autonomous driving software could make a potentially disastrous decision, as explained below.
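Here is a deliberately oversimplified sketch of the detect-then-classify-then-decide logic just described. The object categories and responses are invented for illustration; real autonomous driving software is vastly more complex, but the failure modes are the same in kind.

```python
# A deliberately oversimplified detect -> classify -> decide pipeline.
# The categories and responses below are invented for illustration.

RESPONSES = {
    "vehicle":    "brake and steer away",
    "pedestrian": "brake hard; never steer toward",
    "debris":     "steer around if the adjacent lane is clear",
}

def decide(detected, label):
    if not detected:
        return "no action"  # an undetected object may as well not exist
    # An object that is detected but not recognized falls through to a
    # generic default -- which may be exactly the wrong move for what
    # is actually there.
    return RESPONSES.get(label, "default evasive maneuver")

print(decide(True, "pedestrian"))   # brake hard; never steer toward
print(decide(True, "unknown"))      # default evasive maneuver
print(decide(False, "pedestrian"))  # no action: the pedestrian was never seen
```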
Researchers at Georgia Tech recently demonstrated just such a problem when they were able to show that the most commonly used AI “object recognition” programs could fail to correctly identify individuals with dark skin tones. For those with a background in mathematics and probability theory, the full text of that study, “Predictive Inequity in Object Detection,” is available on the arXiv.org website. For the rest of us, here is a summary of its findings: the pedestrian-detection systems tested were, on average, about five percentage points less accurate at detecting pedestrians with darker skin tones, and the disparity persisted even after the researchers accounted for factors such as time of day and obstructed views.
At this point, we have learned two things about the computers used to make an autonomous vehicle function “as advertised”:
1. An object that lies outside the coverage of the vehicle’s sensors simply does not exist as far as the computer is concerned.
2. Even when an object is detected, the computer may fail to correctly identify it and may therefore choose the wrong response.
We can now take a look at one example of how the above could lead to a serious, if not fatal, accident.
Consider the following scenario:
Imagine a straight and level, two-lane highway, with no obstruction to vision other than a sign that reads “Caution School Bus Stop.” Two vehicles, one operating in its “autonomous / driverless” mode, are traveling at 60 mph and each vehicle is in its correct lane. For some reason, the human driver of the non-autonomous vehicle drifts into the oncoming lane when the vehicles are 300 feet apart. Unfortunately, the driver of the “driverless” vehicle has decided to take a nap.
The autonomous vehicle’s sensors detect the oncoming vehicle when the two are 250 feet apart, and the computer sounds an alert. Since its driver (being asleep) does not quickly respond, the computer “defaults” to its pre-programmed instructions for such a scenario and moves to its right to avoid a head-on collision. Unfortunately, the computer did not recognize the two first-graders who were waiting for their school bus, and both are killed.
In “real life,” all the events in this scenario would have taken place in 1.42 seconds (the time it would take for two vehicles traveling at a combined speed of 120 mph to cover 250 feet). Modern computers are fast, but they aren’t fast enough to deal with every possible situation that may arise!
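The arithmetic behind that 1.42-second figure is easy to verify:

```python
# Checking the 1.42-second figure: two vehicles closing at a combined
# 120 mph must cover the 250 feet between them.

FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

closing_speed_fps = 120 * FEET_PER_MILE / SECONDS_PER_HOUR  # 176 ft per second
time_to_impact = 250 / closing_speed_fps

print(f"{closing_speed_fps:.0f} ft/s")   # 176 ft/s
print(f"{time_to_impact:.2f} seconds")   # 1.42 seconds
```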
This page has briefly discussed another of the shortcomings of “driverless” vehicles. Since there are many more problems that are known to exist in this “emerging” technology, we encourage you to visit this site often to learn more about the legal issues that are certain to arise as more and more autonomous vehicles take to our roadways.
"*" indicates required fields