Autonomous cars are coming, but they are far from perfect at the moment. The biggest problem with autonomous cars is the way they act amongst human drivers. Autonomous cars are great at following the rules of the road to perfection, under perfect circumstances. However, when driving on the road, circumstances are rarely anything resembling perfect. Humans cut corners, both literally and metaphorically; they creep through stop signs instead of coming to a full stop, and they generally just act unpredictably. Humans drive in such an unmachine-like way that machines will have a very difficult time conforming to it.

Well, Google is apparently attempting to rectify this by making its autonomous cars drive more like humans. This has been a dilemma since the dawn of artificial intelligence: how to make it more human-like. It’s damn near impossible, as humans are incredibly complex creatures and there seems to be no rhyme or reason to how or why they think what they think. It’s just not possible to program a machine to mirror the thought process of a human, as humans can react via instinct, intuition and emotion. Machines are ones and zeros and must be programmed to react to certain events. Can machines learn? Of course they can. But not nearly in the same manner as humans can.

Google is doing some excellent work, though, as are other companies. Google has been sending its autonomous cars out, covering literally millions of miles, and recording all of it. Whenever the driverless cars react strangely, Google documents it and makes little tweaks to how they behave in those situations. For instance, if a parked car juts into the road too far for the car to pass without crossing the double yellow lines in the middle of the road, a human would normally just cross the double yellow a bit to get around the parked car. The Google car, however, was originally programmed never to cross the double yellow under any circumstances, so in that particular situation it just stopped dead in the middle of the road indefinitely. Google had to change this so the car could react more like a human would.
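To make that anecdote a little more concrete, here is a rough, hypothetical sketch of what loosening a hard-coded rule like that might look like. The function, parameters and thresholds below are invented purely for illustration and have nothing to do with Google's actual software.

```python
# Hypothetical sketch of replacing a hard "never cross the double yellow" rule
# with a conditional one. All names and numbers are invented for illustration.

def plan_around_obstacle(lane_width_m, obstacle_overhang_m, oncoming_clear,
                         car_width_m=1.9, margin_m=0.3):
    """Decide how to handle a parked car protruding into our lane."""
    usable_width = lane_width_m - obstacle_overhang_m

    # Enough room to squeeze past without leaving the lane: just continue.
    if usable_width >= car_width_m + margin_m:
        return "pass_within_lane"

    # Original rule: never cross the double yellow, so the car simply stops.
    # Revised, more human-like rule: briefly cross if the oncoming lane is clear.
    if oncoming_clear:
        return "cross_double_yellow_briefly"

    return "stop_and_wait"


# Example: a 3.5 m lane, a car sticking out 1.8 m, and no oncoming traffic.
print(plan_around_obstacle(3.5, 1.8, oncoming_clear=True))
# -> "cross_double_yellow_briefly"
```

The point of the sketch is simply that the fix is another rule layered on top of the old one, which is exactly why this approach struggles with situations nobody anticipated.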

That’s the issue with autonomous cars, though. No matter how clever the programming is or how many situations are programmed into the car, it won’t be enough. There is an infinite number of ridiculous situations that can be caused by erratic human behavior that computers won’t be able to react properly to. And because machines can’t judge such a thing with the same level of problem solving a human can, they may react in a way that does more harm than good. Obviously humans can react poorly as well, and often do, but a human is still more likely to react properly than a lifeless machine is. Machines can’t judge a situation based on morals, either.

Google is doing some amazing work, but the entire idea might be flawed. Maybe one day, machines will be able to think in a more human-like way rather than just in a series of ones and zeros. But at the moment, it seems that humans and machines are not meant to coexist on the roads. Autonomous cars are actually excellent drivers and can even lap racetracks better than humans, but humans just react too unpredictably for machines to always respond to properly. You can’t blame Google for trying, though, as the work must be done to lead to a better future, but until something changes, it’s not likely these machines will ever be able to drive around our roads as human-like as we’d want.

[Source: Quattro Daily]