Current data = one accident. I have a hard time drawing any conclusions from one accident. The fact that there was a driver behind the wheel in this accident illustrates exactly why there are good reasons to automate the process so people can't screw it up by, for example, playing on their phones while operating a vehicle.
I don't know how much behind-the-scenes testing was done before the technology was allowed on the roads, but nobody is sending V1.0 out on public streets. I tend to believe that Uber had done enough testing to have a reasonable level of confidence in their technology before sending it out on the road. The cars were in a real-world environment with human backup, but the person responsible for maintaining control in this test screwed the pooch. That resulted in one of the types of accident automated cars will eventually prevent - distracted driving.
Whether automated vehicles will be safer than human-guided vehicles is the big question. My opinion is that engaged drivers will be better drivers on the whole than automated cars until the technology is very mature, but the automated vehicles will be more adept than the impaired (whether by age, lack of experience, chemicals, or distraction). I also believe that if you can remove the bad (impaired) drivers from the mix, accident rates will drop disproportionately. As the technology improves, automated vehicles will be a better (safer, more efficient) choice for a bigger and bigger slice of the population, and one day manual driving will be mostly for enjoyment, not for daily transportation.
Here’s the data (from
https://www.rita.dot.gov/bts/sites...ansportation_statistics/html/table_01_35.html)
Self driving cars have been on the roads, what, maybe 2-3 years? I’ll give it the benefit of the doubt and use data for 2 years only from the above. I’ll include the total highway miles for 2014 and 2015. I doubt this includes in town driving or short trips, but it will serve to illustrate the issue.
I’ll use scientific notation NeM, which means N x 10 raised to the M power.
In 2014 and 2015, total highway miles driven = 6e12 (6 million million). In those 2 years, there were approximately 70,000 (7e4) vehicle fatalities. (
https://en.m.wikipedia.org/wiki/Motor_vehicle_fatality_rate_in_U.S._by_year)
So, deaths per mile for non-autonomous vehicles = 7e4/6e12 = 1.2e-8.
This reference
https://medium.com/waymo/waymo-reaches-5-million-self-driven-miles-61fba590fafe has Waymo’s numbers for miles driven. I’ve seen estimates for uber of 1 million. So Waymo (Google) + Uber = 6e6
Deaths per mile for autonomous vehicles = 1 / 6e6 = 1.7e-7
Ratio of death rates, self-driving / non-self-driving = 1.7e-7/1.2e-8 = 14.
So, the data to date shows that the per-mile fatality rate is 14 times higher for the self-driving vehicles. That is significant.
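The arithmetic above can be sketched as a few lines of Python. All inputs are the rounded figures quoted in this post (highway miles and fatalities from the BTS and Wikipedia links, Waymo's reported miles plus the rough Uber estimate), not exact statistics:

```python
# Back-of-envelope fatality-rate comparison using the rounded numbers above.

human_miles = 6e12        # total US highway miles, 2014-2015 (BTS table)
human_deaths = 7e4        # approximate vehicle fatalities in those 2 years

av_miles = 6e6            # Waymo (~5 million) + estimated Uber (~1 million)
av_deaths = 1             # the single fatality discussed in this thread

human_rate = human_deaths / human_miles   # deaths per mile, human drivers
av_rate = av_deaths / av_miles            # deaths per mile, autonomous

ratio = av_rate / human_rate
print(f"human rate: {human_rate:.1e} deaths/mile")   # -> 1.2e-08
print(f"AV rate:    {av_rate:.1e} deaths/mile")      # -> 1.7e-07
print(f"ratio:      {ratio:.0f}x")                   # -> 14x
```

Note that with only one fatality in the numerator, the autonomous-vehicle rate rests on a single event, which is consistent with the "one accident" caveat at the top of this post.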
My strong suspicion is that since the number of total miles driven by self-driving cars is so low, the death rate will likely go up and not down as these cars are exposed to more real world situations. I strongly doubt these things have had to deal with icy roads, heavy fog, storms, etc. yet. What happens when the sensors are degraded and damaged due to dirt, rocks thrown up by the vehicle in front, hail, heavy rain, mud, etc?
And the driver behind the wheel didn’t even touch the controls, so that certainly doesn’t make the case for autonomy.
My concern isn’t whether Uber thought their cars should go on the road or not. It’s clear they thought so. They were wrong. I’m usually not a fan of a lot of heavy regulation, but I don’t want me or my family to be the guinea pigs for these folks’ software quality control testing. Get these things out of the public and back into the controlled testing environment until they can be _proven_ to be at least as safe as the normal vehicles. And the heavy burden of proof is on the vendors.