With all the promises and hype, it still seems unlikely that we will see AI-driven driverless cars (AVs) as a mass substitute for human-driven cars in the near or perhaps even the far future, short of specialty niches serving the elderly, the young, or the disabled. But why? What is the holdup?
There are many. Severe technical problems such as GPS spoofing, control-system hacking, and sensors that become unreliable in strong wind, dust, snow, ice, cold, or fog, i.e., 85% of driving situations everywhere except California, are all extremely difficult to mitigate, if not fatal to the project, and they make mass production of such cars on the grand scale that Google, Tesla, and others are peddling to the public close to impossible.
Even one Japanese company that spent the last decade developing self-driving software gave up and opened its software patents as part of an open-source platform, asking independent developers to contribute, after concluding that it was unable to produce an "industrial-strength" implementation of self-driving software that complies with safety laws and is economically viable.
And hence Google and others test mostly in sunny California, and some in Washington and other locations, on hi-res pre-mapped routes with sensors built into the physical infrastructure and an army of technicians to control the vehicles during tests and keep them in good operating condition, so far seemingly not addressing, or failing to address, any of the serious issues above.
Google was a pioneer of AV development, but, flush with billions of dollars of investor money, it unfortunately did not achieve any dramatic breakthroughs. Contrary to public announcements, even safety could not be dramatically improved.
Although Google has reported millions of miles driven, it is not clear how many of those miles were driven in truly autonomous mode rather than in an intermittently assisted mode controlled from a nearby technical vehicle.
Only recently, after a bus crash, did Google for the first time admit its car was at least partly responsible. The computer and the human driver both assumed the bus would yield as the car moved around sandbags. Instead, the bus kept going, and the AV hit its side. Google said it has updated its software.
While this incident by itself would perhaps not have had such a sobering effect on AV developers and the public, the subsequent death of a tech-savvy Tesla Autopilot driver, Joshua D. Brown, on a highway in Williston, Florida, raises serious concerns about the true safety of AV software and the reliability of the radar, ultrasonic, and SfM optical sensors these vehicles use. The collision occurred in seemingly similar circumstances: "a tractor-trailer rig made a left turn in front of the Tesla at an intersection of a divided highway where there was no traffic light," and the Tesla struck its side windshield-first, decapitating the driver.
What's disturbing is the reaction of Tesla CEO Elon Musk, who, after proudly pronouncing just four months before this tragic accident that Autopilot is "probably better than a person right now," subsequently blamed the driver, who likely "misused" the Autopilot system by treating it as a true AV function.
A telling attitude from a self-proclaimed billionaire technology guru toward all the little earthlings who are not as perfect as Musk's creations.
Musk now seems to be backpedaling from his recent bold assertions, trying to diminish the "Autopilot" system, hailed just a few months ago as revolutionary technology, into a mere better cruise control.
What's even more disturbing is that the prior Google incident was never investigated by police, and Google's reaction was limited to a PR statement that a problem clearly similar to Tesla's Autopilot flaw had been fixed. Should we trust corporate pronouncements regarding the safety of AVs?
In about a dozen other crashes on city streets, Google blamed the human driver of the other vehicle, but no adequate police investigations were conducted, or even requested, if it matters. One must also question the true impact of a massive introduction of AVs on accident and accident-death rates, since those rates seem to be much more strongly correlated with the overall strength of the economy and miles driven.
Even before a single commercial AV was introduced into US traffic, as US DOT reported, U.S. traffic deaths had declined steadily for most of the past decade, from 43,510 in 2005 to 32,675 in 2014, due to the decline in economic activity.
A Virginia Tech University study commissioned by Google found that the company's autonomous cars crashed 3.2 times per million miles, compared with 4.2 times for human drivers. That is not a substantial improvement, especially since the analysis was based on "controlled" runs and on a small sample of seven years of driven miles, at least 30 times smaller than necessary to assess the human safety rate. A monumental difficulty added to already mounting problems.
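The sample-size complaint can be made concrete with a rough calculation. The sketch below treats crashes as a Poisson process and asks how many miles each fleet would need before a rate of 3.2 versus 4.2 crashes per million miles becomes statistically distinguishable at the usual 95% confidence level; the Poisson model and the two-sample test are illustrative assumptions, not the actual methodology of the Virginia Tech study.

```python
# Hedged back-of-envelope: miles needed to distinguish the two crash rates.
# Assumes crashes follow a Poisson process and uses a simple two-sample
# rate comparison; this is an illustration, not the study's methodology.

r_av, r_human = 3.2e-6, 4.2e-6      # crashes per mile, as reported
diff = r_human - r_av

# Require the observed difference to exceed ~1.96 standard errors of the
# combined rate estimate (95% confidence, two-sided).
z = 1.96
miles_needed = z**2 * (r_av + r_human) / diff**2
print(f"~{miles_needed / 1e6:.0f} million miles per fleet")  # ≈ 28 million
```

On the order of 28 million miles per fleet, far beyond the few million miles Google had logged at the time, which is exactly why a seven-year sample is too small to settle the safety question.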
As an autonomous car with AI is nothing but a robotic device, it is quite shocking that Google is abandoning Boston Dynamics, the robotics research company it bought just a few years ago, getting less for it than it paid. Why abandon the future of robotics? Do they know something? Maybe they realized it is just an empty Ponzi scheme that will never work? Or they are moving it into their classified AV work for the Pentagon, comprising autonomous killing machines or similar.
So far no studies have been published demonstrating any statistically significant advantage of AVs in real traffic, in the sense of shortening travel time on the same road, in the same traffic, while surrounded by human drivers on the same roads, or of reducing overall miles driven. Some studies even suggest that no decrease in travel time can be achieved at all without massive investment in dedicated AV smart-road infrastructure, although such infrastructure would make human driving much safer as well.
In fact, other analyses and studies of autonomous traffic suggest that AVs will substantially slow down traffic flow, since an AV will not exceed the legal speed even temporarily, so as not to violate traffic laws, while such minor violations commonly smooth out and increase traffic flow and are often tolerated by highway law enforcement during rush hours.
Also, as has been demonstrated, the combination of unreliable, imprecise environmental sensors and event-prediction technology that looks only a minute or so into the future, far inferior to a human's, forces driving algorithms to be extremely cautious and defensive, generally slowing the vehicle down.
Such a phenomenon makes the assertion of AV safety superiority over human drivers a moot point, since a decrease in the average speed of traffic flow lowers the expected accident rate for all drivers anyway. If the effect of AVs were to slow down traffic, it would decrease all accidents, not because of the intelligence of AVs but because of the general rule that slower traffic is safer.
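The "slower traffic is safer for everyone" point can be illustrated with Nilsson's power model from road-safety research, in which injury crashes scale roughly with the square of mean speed and fatal crashes with the fourth power. The 5% speed reduction below is a hypothetical figure chosen for illustration, not a measured AV effect.

```python
# Illustration via Nilsson's power model: injury crashes ~ (v2/v1)^2,
# fatal crashes ~ (v2/v1)^4. The 5% mean-speed drop is a made-up example
# of the slowdown AVs might impose on mixed traffic.

v_before, v_after = 65.0, 61.75       # mph; hypothetical 5% reduction
ratio = v_after / v_before

injury_change = ratio**2 - 1.0        # ≈ -9.8%
fatal_change = ratio**4 - 1.0         # ≈ -18.5%
print(f"injury crashes: {injury_change:+.1%}, fatal crashes: {fatal_change:+.1%}")
```

A mere 5% drop in mean speed cuts predicted fatal crashes by almost a fifth for every vehicle on the road, AV or not, which is why a speed-driven safety gain says nothing about the intelligence of the AV itself.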
Another extremely serious problem is the viability and durability of the sensors, including LIDAR, LEDAR, RADAR, ultrasonic, sonar, and optical SfM technologies, all with well-documented weaknesses and a propensity for phantom readings picked up from the environment, as any radar-detector user, or anyone who has run LiDAR mapping in bad weather, can attest. Most radar technologies are prone to similar deficiencies. Optical sensors are also extremely vulnerable to bright light due to specular reflection, and to dust, mud, and strong wind; all of them are easily damaged by mud and debris, forcing the system to request manual driving, or to cease driving and seek shelter for repairs where a human driver would continue with caution.
Also, the fatal flaw of GPS spoofing has already been demonstrated by UT, while the military refuses to allow access to secured GPS, leaving autonomous driving that relies on GPS dangerous. That is why AVs are designed not to rely entirely on GPS but instead to use pre-acquired, regularly updated hi-resolution mapping data along the testing routes. In any practical implementation, sub-centimeter-resolution geospatial data for all US roads would be impossible even to obtain or to load directly into a vehicle in mass production of low-cost AVs, or even in a commercial transportation setting, not to mention updating it while driving without an ultra-high-speed WiFi system deployed along the roads.
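A rough storage estimate shows why nationwide sub-centimeter maps cannot simply ride along in every car. Every figure in the sketch below, road mileage, average paved width, bytes per map point, is an assumption of the right order of magnitude, not a measured value.

```python
# Back-of-envelope: onboard storage for a sub-centimeter map of all US
# public roads. All inputs are rough assumptions for illustration.

road_miles = 4.1e6        # ~4 million miles of US public roads (order of magnitude)
road_width_m = 10.0       # assumed average paved width incl. shoulders
point_pitch_m = 0.01      # one map point per centimeter in each direction
bytes_per_pt = 16         # position plus surface/semantic attributes (assumed)

road_m = road_miles * 1609.34
area_m2 = road_m * road_width_m
points = area_m2 / point_pitch_m**2
petabytes = points * bytes_per_pt / 1e15
print(f"~{petabytes:.0f} PB of raw map data")  # ≈ 11 PB
</n```

Roughly ten petabytes of raw data, before any updates, is orders of magnitude beyond what a low-cost consumer vehicle could carry or refresh over the air, which is why test fleets confine themselves to a handful of pre-mapped routes.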
WiFi/cellular spoofing is another potentially fatal problem that could cause an AV to leave the road or hit a barrier. The Snowden NSA revelations disclosed that such WiFi/cellular/GPS spoofing technologies, and even specific tools available on the black market, are in private hands; applied to an AV, they could readily be used for criminal purposes such as assassination or assault on a third party.
One thing that must be kept in mind is that no amount of even brilliant AI programming will ever remedy the physical laws of mechanics and gravity, which cannot be suspended. Hence, what is not taken into consideration is the so-called unpredictable third-party effect, heavily dependent on unknown conditions not only of the road but of the other vehicle itself. A simple example is braking too strongly and being hit from behind by a human driver, or the reverse situation. How an AV would avoid this is unknown, so it provides no advantage in this very important safety category. Apologists for the technology would reject applying a higher standard to AVs than to human-driven vehicles, but in that case, what is the innovative edge of AVs if they do not significantly improve safety?
There is also the viable question of whether an AV should break traffic and vehicle laws in order to avoid a deadly collision or gain some other clear advantage. Simulations of AVs occasionally breaking traffic laws when absolutely "necessary" are being conducted at Stanford University, but whether DOT would ever certify such "traffic-law-violating" AV software remains a big unresolved question.
Another question is whether an algorithm aimed at avoiding collisions is really the one we want, since in many cases the selection of the target of an inevitable collision may exacerbate or limit casualties, what we call the problem of sacrifice/self-sacrifice. It is a problem unresolved in human reality and nonexistent in the AI realm. How would the AI decide whether to hit a group of children or a wall when faced with such a common, bad-choice alternative in the last stage of almost any serious accident?
I am not saying that a human is a superior driver to a machine, but that the machine is neither superior nor even equal to a human in such situations; so what is the advantage of going with AVs? The public will have to decide.
After initial promises, made ten years ago, of self-driving for the average Joe by 2012 or 2015, this seventh year of road testing should tell us that something is not right, at least in typically difficult US driving conditions: software algorithms unable to handle many road situations, and failures of unreliable sensors producing phantom readings that confuse the system and prevent travel in environmental conditions that most drivers would consider safe and most vehicle and traffic codes would allow. Now that the initial hype has subsided and reality is sinking in, most cool-headed experts do not predict mass AV production before 2035 at the earliest.
They are facing sensor/algorithm malfunctions and system failures similar to those that have so far prevented the introduction of pilotless commercial airliners, even after 20 years of advanced drone technology with its relatively high incident rate, due mostly to the unpredictability of equipment failure in varying weather conditions.
The lack of self-driving trains is likewise a manifestation of the fundamental control problem encountered in the railroad industry, even though most train-driving functions have already been automated for decades.
Train engineers are still required by law to make the ultimate decisions, even in an environment as closely controlled as a railroad system.
The telling absence of leadership on driverless cars from German companies, which have all the expertise and money to invest, puts a fine point on the feasibility of the whole project within the overall concept of a transportation system.
At the same time, Mercedes and BMW are very much involved in projects to augment road infrastructure [or dedicated roads/lanes] and safety with a variety of sensors built into the pavement and above the road, tracking location data and cars in order to provide reliable force- and momentum-based data to driverless cars and rely less on the cars' own sensors in the extremely difficult weather and road conditions of Germany. Mostly, though, they focus on augmented driving, i.e., partially instrument-based but still human driving, not autonomous driving.
Unfortunately, the costs of an industrial-strength system remain prohibitive and exceed the cost of public transportation, well developed in Europe, which can get you almost anywhere more efficiently, more cheaply, and faster than any AV ever could.
So far, those who push driverless cars have failed to prove that a self-driving car would provide any additional safety or any quantifiable advantage or efficiency in real human-driven traffic, short of temporarily enabling texting instead of driving, and even that is questionable for psychological reasons.
One of those reasons is the fact that both autonomous and quasi-autonomous car systems have one fatal flaw: they impair the alertness of the driver (now an active passenger). The illusion that the car drives itself builds, over time, a false confidence in the AV system's driving ability, so focus on traffic conditions and circumstances fades and the tension of alertness decreases, in a way similar to the effects of drug or alcohol impairment. The result is an inability to react properly when driving algorithms or sensors fail, and to regain sufficient control fast enough to mitigate the situation, a criticism Ford levels at Tesla's part-time Autopilot.
In other words, in most cases a human is unable to regain adequate control over the autonomous car when its systems fail or when they demand human intervention.
Although many accidents may not be the "fault" of the driverless car's AI system, a claim that is disputable in the broader sense of road safety, that does not justify its use when injuries to a human occur and the robot car was unable to avoid them. Blaming humans for the mistakes of AVs denies even basic legitimacy and efficacy to autonomous driving systems.
Also, a shrinking economy, lower employment, and people moving from the suburbs to the cities will obviously further decrease miles traveled, already down 55% from the 2007 peak. That in turn will increase the safety of human drivers, thanks to less stressful driving on emptier roads and over shorter distances. As experience has already proved, the number of accidents could be massively reduced just by road-infrastructure improvements, by eliminating dangerous road segments where accidents cluster, or by implementing more AI-based traffic-flow control.
At this point, the most common outcome for a proud potential owner of a self-driving car in real-life conditions, the difficult, changing conditions of the road, would be to human-drive it, get stuck somewhere, or stay home and call a taxi.
So far the AV is all hype and illusion for the purpose of pumping the stock price of Google, Tesla, and other unicorns, while the stated ultimate goal is practically unachievable as its authors imagined it and sold it to the public.
On top of it we have a legal mess, with attempts to give driverless systems some human-like legal standing (akin to corporate personhood), with eerie sci-fi connotations of the human-vs.-machine superiority question. Not to mention the legal and moral ramifications of life-and-death sacrifice algorithms run by machines.
The recent death on a Florida road will surely be a legal test of the concept of autonomous and strongly assisted driving, regarding manufacturer responsibility for an accident caused by the "normal" functioning of the car's control systems, in contrast to previous cases where only system malfunctions or deficiencies were the subject of litigation.
What would be extremely impactful as a legal precedent is a ruling on the question of who was actually driving when the impact happened, rather than who theoretically was or should have been in control, opening up a Pandora's box: the weird world of machine ethics and corporate responsibility for autonomous creations.
A similar question relates to the equivalent of the driving test: would it be mere software certification, or a real test using the real, particular AV purchased by the owner, with all its intricacies, wear and tear, etc.? These are not trivial questions with simple solutions, since the driving ability of AVs depends heavily on local road infrastructure, which differs widely throughout the US and elsewhere; hence an AV may be certified to drive in one location but not in another.
All of that complicates real-life solutions and pushes AV application away from individual means of transportation and toward being part of a big, complex, multi-layered transportation system.
Looking at the even bigger picture and the enormous development of digital communications, including image/video and VR, many see the transportation system as part of a wider human communication system, with both fused into one most efficient design. The choices we will face will likely diminish the specific need for travel and push us toward live remote communication, public transportation in a variety of forms, and urban planning that develops spatially concentrated small-to-medium communities.
There is no doubt that AVs will find a niche in the overall transportation system, mostly in augmented driving, but they are in no way a panacea for mainstream transportation problems.
Even Baidu has given up on AV software development and released it to the developer public, unable to make sufficient progress on its own, thereby challenging the proprietary software of Tesla, Google, Apple, and Uber, as well as a few more developers in Japan and the US who have thrown in the towel on the project's IP.
Another Uber accident, via ReCode: "But, as we first reported, the company's self-driving arm has seen little progress in the overall reliability of its autonomous systems. As of the beginning of March, safety drivers had to take back control an average of once per 0.8 miles."
The top line: Uber’s robot cars are steadily increasing the number of miles driven autonomously. But the figures on rider experience — defined as a combination of how many times drivers have to take over and how smoothly the car drives — are still showing little progress.
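The reported figure of one takeover per 0.8 miles translates into striking numbers for any ordinary trip. The quick arithmetic sketch below uses illustrative trip lengths, not Uber's own data beyond the single reported rate.

```python
# What "one safety-driver takeover per 0.8 miles" (the reported figure)
# implies for ordinary trips. Trip lengths are illustrative assumptions.

miles_per_takeover = 0.8
for trip_miles in (5, 15, 40):     # short errand, commute, longer drive
    expected = trip_miles / miles_per_takeover
    print(f"{trip_miles:>3}-mile trip -> ~{expected:.0f} expected human takeovers")
```

Even a five-mile errand would be expected to need half a dozen human interventions, hardly the picture of steadily maturing autonomy.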
More Uber self-driving mess:
While, as we pointed out, the real danger of GPS spoofing and of hacking AV control systems had already been proven by tests and experiments, now, after the recent WikiLeaks release of the Vault 7 data, we have dramatic confirmation not merely of such possibilities but of the reality of such cyberwarfare systems, already implemented and possibly deployed by the CIA, potentially for assassinations, which is sadly one of the CIA's prerogatives. What's worse, it looks like the CIA itself has been hacked, and those tools are now widely proliferated in the hacker community and sold to whoever wants them, for whatever purpose, and it ain't benign. Worse still, the CIA never revealed the theft, exposing us all to more danger from the very agency charged with our security.
How did the CIA dramatically increase proliferation risks?
In what is surely one of the most astounding intelligence own goals in living memory, the CIA structured its classification regime such that for the most market valuable part of “Vault 7” — the CIA’s weaponized malware (implants + zero days), Listening Posts (LP), and Command and Control (C2) systems — the agency has little legal recourse.
The CIA made these systems unclassified.
Hence they cannot claim classification-based use limitations. Why did they do that? Because it would be illegal to place classified software on the net. What!!!!
It is nothing but a tremendous failure of competency by the CIA.
“The CIA’s Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smart phones. Infected phones can be instructed to send the CIA the user’s geolocation, audio and text communications as well as covertly activate the phone’s camera and microphone.”
First Tesla autopilot accident death.
First Tesla autopilot crash in China.
Another Uber publicity stunt: AV rides, with a dedicated driver inside, by September 2016.
Another fake breakthrough: an AV in five years from Ford!!! With a caveat: an Uber-like service, only in selected well-mapped cities, with special [expensive] maintenance infrastructure, and only in limited downtown areas, all AVs owned and operated by Ford Motors in order to "solve," or rather eliminate, the range-limit problem, while also eliminating the weather problem by not allowing bookings in bad weather.
Google AV "broke the law"; who's going to pay the ticket?
Uber insists its cars are driven by "drivers" sitting in the driver's seat, even if those drivers do not touch anything at all. The DMV disagrees, sending Uber a cease-and-desist letter after an Uber autonomous "driver" ran a red light in San Francisco. See the YT clip. CA DMV wins! Uber out of San Francisco (to Arizona).
Google ditches the AV effort from Google X into a separate Waymo entity. Why? What do they know?
Holy shit, AVs are here, now, in 2017!! WOW! Well, they are here all right: in Paris the dream finally came true. Really? These are just small electric buses, part of the Paris PUBLIC TRANSPORTATION SYSTEM [not private cars], tested on dedicated, fenced lanes 400 feet long! No human-driven vehicles or other AVs are allowed on the bus lanes. The buses travel at about 25 miles per hour, maintaining a few tens of feet of average separation, all on the same gigabit WiFi network with antennas deployed along the line and a slew of sensors embedded in the ground. Service stops in fog, heavy rain, or snow, i.e., a typical fall and winter in Paris. Hardly an example of a mass AV revolution.
Another Tesla Autopilot accident: driving straight into a temporary barrier. Luckily there were no serious injuries, and a secondary collision was avoided, though it is unclear whether due to AP intervention, driver intervention, or other drivers' acuity.