The more I think about it, the more I think that this autonomous car thing is much farther off than a lot of people think/hope.
The one thing I can’t reconcile is whether computers/AI will ever be able to replicate the human mind when it comes to prediction. Most of us have been in a situation where all our previous life experiences culminate in a “spider sense” of sorts, where we can predict the actions of others just by observing them. We’ve all passed a cyclist or even another car and thought “this guy doesn’t even know I’m here,” and slowed down because the inattentive cyclist “looks” or behaves a certain way. Or we see kids playing in a yard and we slow down because we know that a ball may get kicked into the road. I think no matter how good the AI becomes, it’s not going to be able to “read” a fellow driver’s actions before they even happen, or see a scenario unfolding and base its driving on things that might happen (a kid running into the street).
As for this specific accident: I lived in Tempe long ago. I don’t know which road this happened on, but most of these four-lane roads in Tempe are pretty well lit. Even if someone is crossing between the light posts, I can’t believe that an attentive driver wouldn’t have seen this pedestrian as she was crossing the other three lanes. I would also think that an attentive driver would have braked hard, swerved, and laid on the horn, perhaps warning the pedestrian to stop or jump back out of the way — perhaps getting injured, but maybe not fatally.
I’ve worked on complex systems with realtime input, first as a technician and later as a programmer.
I also believe we are a LONG way from the goal. We may never even get there.
There are ALL kinds of issues. Sensors have to work flawlessly, OR catch any instance of failure and “shut down” safely so that only manual human operation is possible until they’re fixed — and they have to do this in all kinds of weather: snow, rain, dirt, grime, temperature extremes, etc. Then there’s maintenance (much looser, of course, than aircraft maintenance). And there’s a HUGE issue with the firmware and software (updates, patches for bug fixes, etc.): in a complex system it is VERY hard (impossible, really) to thoroughly test for all conditions, let alone verify that all the conditions and hazards on real roads are dealt with correctly.
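The fail-safe idea above — a sensor that can’t prove it’s healthy should force a handover to the human — can be sketched roughly like this. This is a hypothetical illustration only; all the names (`SensorReading`, `Supervisor`, the staleness threshold) are made up for the sketch, not taken from any real vehicle stack:

```python
STALE_AFTER_S = 0.2  # a reading older than this counts as a sensor failure (made-up threshold)

class SensorReading:
    """A single sensor value plus the time it was produced."""
    def __init__(self, value, timestamp):
        self.value = value
        self.timestamp = timestamp

class Supervisor:
    """Tracks the latest reading from each sensor and decides the driving mode."""
    def __init__(self, sensor_names):
        self.latest = {name: None for name in sensor_names}
        self.mode = "AUTONOMOUS"

    def report(self, name, reading):
        self.latest[name] = reading

    def check(self, now):
        # Any missing or stale reading triggers a safe handover:
        # drop out of autonomous mode rather than guess.
        for name, reading in self.latest.items():
            if reading is None or now - reading.timestamp > STALE_AFTER_S:
                self.mode = "MANUAL_ONLY"
                return False
        return True

sup = Supervisor(["lidar", "radar"])
sup.report("lidar", SensorReading(1.0, timestamp=0.0))
sup.report("radar", SensorReading(2.0, timestamp=0.0))
sup.check(now=0.1)   # both readings fresh -> stays AUTONOMOUS
sup.check(now=5.0)   # readings now stale -> drops to MANUAL_ONLY
print(sup.mode)
```

The hard part, of course, isn’t this toy logic — it’s making the staleness and plausibility checks themselves reliable across all the weather and hardware conditions mentioned above.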
Also, the built-in reactions: there’s a possibility of human drivers discovering and exploiting those to cut people off, sneak into a queue, etc.
Then there are the laws (who gets a ticket if your self-driving car makes a mistake? can a person drink and sit in the LH seat?).
I've seen the state of software engineering decay. I see it in releases from the biggest companies on down: bugs, vulnerability to hacking, ****-poor design that obviously comes from bad planning and not enough understanding of how the software should work.
Many get hung up on whether the woman was "at fault". It makes no difference. This is real life, and situations like these happen. The question is whether the average driver would have avoided killing her. Same thing with other drivers: they don't always act rationally, which seems like a point in favor of automation but isn't necessarily so. So far, I think humans are better at identifying and avoiding "crazy drivers" (I'm always wary and trust no other car to do the right thing) and at processing complex information. Computers are faster. But it all comes down to the sophistication of the programming, and there we end up with such complex interaction of software that no human can grasp it all, and a minor "adjustment" can have unforeseen effects.
And any testing of such things is bound to be spotty; it cannot be thorough.