A self-driving Tesla was involved in a highway accident with a big rig in May that resulted in the death of the driver/passenger. Under bright daylight conditions, the Robocar failed to recognize a white semi crossing its lane as a truck to avoid, and attempted to drive under it instead of braking to avoid the crash. This was the first known fatal accident involving an autonomous vehicle, and we only know about it because a federal agency opened an investigation. Tesla, in a statement on its website, is blaming the victim, saying that it is the responsibility of the passenger to become the driver and take over when an accident is impending, and that the Robocar was never designed to be fully autonomous.
"It is important to note that Tesla disables Autopilot by default and
requires explicit acknowledgement that the system is new technology and
still in a public beta phase before it can be enabled. When drivers
activate Autopilot, the acknowledgment box explains, among other things,
that Autopilot 'is an assist feature that requires you to keep your
hands on the steering wheel at all times,' and that 'you need to
maintain control and responsibility for your vehicle' while using it."
This stands in stark contrast to statements made previously by the company's CEO, Elon Musk:
“The probability of having an accident is 50% lower if
you have Autopilot on. Even with our first version. So we can see
basically what’s the average number of kilometers to an accident –
accident defined by airbag deployment. Even with this early version,
it’s almost twice as good as a person.” He also said, “It’s probably better than a person right now,” and that it “will be able to drive virtually all roads at a safety level significantly better than humans.”
So why are we just finding out about this now? Because the Robocar lobby stands to take a big hit from it. All along they have been extolling the benefits to society of self-driving cars. Computers are so much better than humans, they claim, that they will bring an end to accidents altogether and make our highways safer for travel. No more fatalities, they declared. They have even argued that the government should take the human out of the equation altogether by never again allowing any human to drive.
This accident changes things. It should lead us to question the headlong rush to approve autonomous vehicles on our streets. The argument from Google and other Robocar proponents has always been that self-driving cars, or autonomous vehicles, are safer than having people behind the wheel. "Think of all the lives we could save," they say. "People need never die in a car ever again," they proclaim. Well, someone just died at the virtual hands of an autonomous vehicle. The software failed, and the company is blaming the human. If he had had his hands on the wheel, if he had been in control, the accident could have been avoided, they're now saying.
The problem with this is that we are trained to think differently in different situations. In cars, we have an alert driver mode and a relaxed passenger mode. We are used to it; we have always experienced it that way, and it is difficult to switch back and forth in an instant. We can't be both driver and passenger; our brains don't work like that. So to say it was the passenger's fault is naive at best. The human in the Tesla was a passenger who trusted the technology because he believed all the hype, and he paid for that with his life. There is a difference between a software glitch in autocorrect that results in a misspelling and one in a Robocar that can result in death. But the hype machines of Google, Tesla and other autonomous car proponents haven't been treating it that way. To them it's just another piece of tech to push.
The spin doctors are already working overtime on this because it could set the Robocar industry back years. But this time, let's not believe the hype and let's ignore the spin. Do we really trust Google or Tesla or other manufacturers to tell the truth about this? And can we really trust having autonomous vehicles on our roads?
Dr. Tim Lynch
Psychsoftpc
Site: www.psychsoftpc.com
Twitter: @Psychsoftpc
Facebook: Psychsoftpc
Computers Built in the USA With Traditional Massachusetts Craftsmanship by Psychsoftpc. Psychsoftpc makes high performance Virtual Reality ready 4K Gaming Computers, GPU Tesla Personal Supercomputers, Graphics Workstations and turn-key Big Data Hadoop Clusters in Quincy, MA. Psychsoftpc also sells 3D Printers.
Dr. Tim Lynch, President of Psychsoftpc, received his Ph.D. in Psychology of Computers and Intelligent Machines from Boston University. Shortly thereafter, Omni Magazine named him the first Robopsychologist, or Computer Psychologist. He was then written up as a computer psychologist, or psychologist who studies how computer interaction affects personality and how to make computer interfaces more user friendly, in the Wall Street Journal, Psychology Today, the New York Times, the Washington Post, the Atlanta Journal and Constitution, the London Sunday Times, Computer World, and many other publications. As part of his doctoral dissertation on the effects of computer use on personality and social interaction patterns, he created Artificial Intelligence natural language software that became the basis of programs used by the NIH Division of AIDS Research and the United Nations, among others. Dr. Lynch was an editor for the first Journal of Psychology of Computers. He has taught graduate-level courses and written numerous journal articles on artificial intelligence, ethics in computer science, the psychology of computers, and how interacting with computers and intelligent machines affects people.