News broke recently that several of Google’s and Delphi’s self-driving cars have been involved in accidents. Over the past few years, these companies have been running autonomous prototypes around the Silicon Valley towns of Mountain View and Palo Alto. Each company has logged an incredible number of miles with its self-driving cars (Google has run approximately 1.8 million autonomous miles, while Delphi’s numbers aren’t known), and they have reported very few accidents, all supposedly causing only minor property damage. Apparently, though, that may not be the case.
Last Thursday, California state officials reported six accidents involving self-driving cars. To do so, they had to reverse a policy that kept such accident reports confidential. Of the six accidents, five involved Google-owned Lexus SUVs outfitted for autonomous duty, and one involved a Delphi-owned Audi. In four of the five Google accidents, the cars were driving autonomously; in the fifth, a driver took over after realizing an accident was imminent but was still hit in the right rear by an Audi S6. There isn’t much information on the Delphi accident, except that it occurred while the car was waiting at an intersection, so it may not have been the car’s fault.
Google has been assuring us for years that these autonomous vehicles operate more safely, and with fewer accidents, than human-driven cars over the same period. However, with four accidents happening within a span of approximately 100,000 miles, and with no obligation until now for Google to publicize its accidents, how many have there really been? It seems illogical that the largest cluster of accidents would occur at a time when autonomous driving technology is at its absolute peak. It stands to reason that the accident rate was far higher years ago, but because the cars were prototypes under testing, the companies didn’t have to report crashes, so we couldn’t call their bluff.
These reports just go to show that autonomous driving is still a long way off. It’s very impressive when cars can park themselves and act as autonomous valets, a la the new BMW 7 Series, but fully autonomous long-distance driving on public roads is far from reality. It seems as if there are just too many unknown variables for a car to drive itself through traffic dominated by unpredictable human drivers. However, we’re moving in the right direction now that accidents must be publicly reported. This way, we can better gauge the real dangers of having cars pilot themselves down our public highways, instead of having Google do this in secret while we remain none the wiser.
What if someone had been seriously injured by one of these autonomous vehicles? How would that be explained? I’m not against the idea of self-driving cars, but with human lives potentially at stake, it creates a strange dynamic. If an autonomous vehicle senses an inevitable crash, how does it decide between hitting another car and veering into a tree to avoid that car? If it veers into the tree, it spares the lives of the people in the other car, but hitting the tree almost certainly ends the lives of the people in its own car. How does it make that judgment? It’s a weird moral dilemma that I don’t think we’re ready for just yet. But it will certainly be interesting to see how it all unfolds now that we’ll be getting more crash information.