First driverless car pedestrian death

I still enjoy driving, motorcycling and bicycling. I live where I do mostly because of the roads surrounding me.

Still, if I could engage a legitimate autopilot as I jump on I-75 headed to FL, and then read or watch TV or surf the internet or nap for the ensuing 10-12 hours, I think that would be a big plus.

This will happen. But the ethical, legal and moral aspects will all have to sort themselves out as it does, every bit as much as the technological. It will be messy, but the end result could save tens of thousands of lives per year, so the effort is ultimately worthwhile.
 
I've always wondered how the self-driving car will decide whether to hit the kid who ran into the street after his ball or swerve into a bush. And how will it know the difference between a kid and a badger?
How does a human driver decide? I doubt that a self-driving car will do any worse than just slamming on the brakes and hitting whatever is in the path of the out-of-control car.
 
Covered on the podcast I linked to are the moral and ethical quandaries similar to the "Trolley Problem".

Assume that autonomous cars are programmed to select courses of action that will result in minimum injury or death to all parties involved. That may mean "deciding" between hitting 2 pedestrians or driving off a cliff, killing only the driver. Makes sense in the broadest societal view, but how many will want to trust themselves to a vehicle that may be programmed to kill them in certain scenarios?
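
For what it's worth, here's a toy sketch of what "pick the course of action with the least expected harm" could look like. Everything in it (the actions, probabilities, and severity weights) is invented for illustration; I'm not claiming any real self-driving stack works this way.

# Hypothetical "minimum expected harm" action selection.
# Actions, probabilities, and severity weights are made up for illustration.

def expected_harm(action):
    # Sum of (probability of harming a party) x (severity weight for that party).
    return sum(p * severity for p, severity in action["outcomes"])

actions = [
    {"name": "brake hard, stay in lane",
     "outcomes": [(0.9, 10), (0.9, 10)]},   # likely strikes both pedestrians
    {"name": "swerve off the cliff",
     "outcomes": [(0.95, 10)]},             # likely kills only the occupant
]

least_bad = min(actions, key=expected_harm)
print(least_bad["name"])   # "swerve off the cliff" with these made-up numbers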
 
Covered on the podcast I linked to are the moral and ethical quandaries similar to the "Trolley Problem".

Assume that autonomous cars are programmed to select courses of action that will result in minimum injury or death to all parties involved. That may mean "deciding" between hitting 2 pedestrians or driving off a cliff, killing only the driver. Makes sense in the broadest societal view, but how many will want to trust themselves to a vehicle that may be programmed to kill them in certain scenarios?

That goes back to my issue with autonomous cars in the first place (at least one of them).

Some of it goes back to the advantage of flying vs. riding a motorcycle. If I die in a plane crash, it was probably my fault, and you can bet that I'll be fighting like hell the whole way down to the ground. If I die in an autonomous vehicle crash, then that means that even if the computer didn't screw up, it did something that I might've been able to do differently. It will be programmed to make the "right" decision, but frankly most of the people doing the programming probably aren't very good drivers and can't program a vehicle to do the kinds of things a good driver can do with it.

While I agree that an autopilot isn't necessarily a bad thing and I wouldn't mind having one in a car, my concern is that the regulations will try to take away the ability of individuals to drive at all, which I think is a problem. Driving is something that's very satisfying for me. To those of us who are driving enthusiasts, it's no different than clipping our wings as pilots.
 
Friends of mine just lost their oldest daughter (18 years old) in a car accident yesterday. Authorities haven't released much except that she crossed the line and hit a pickup almost head-on. The other driver is OK.


I almost hate driving: just boring segments of my life wasted in traffic, and possibly the most dangerous activity I do most days. I would really like to be able to trust and use self-driving cars. Hopefully, by the time I'm old enough that I can no longer hold a driver's license, they'll be mainstream and I won't need a driver's license anymore.

Just wow. She lost her daughter yesterday, and today is the mother's birthday.
 
Covered on the podcast I linked to are the moral and ethical quandaries similar to the "Trolley Problem".

Assume that autonomous cars are programmed to select courses of action that will result in minimum injury or death to all parties involved. That may mean "deciding" between hitting 2 pedestrians or driving off a cliff, killing only the driver. Makes sense in the broadest societal view, but how many will want to trust themselves to a vehicle that may be programmed to kill them in certain scenarios?

Consider this, as well:

Google is one of the major players in the autonomous car arena. Yet Google seems incapable of keeping their Adsense publishers' addresses up-to-date and has mailed thousands, maybe tens of thousands, of 1099s to the wrong addresses this year -- despite being told many, many times that there was a problem, and despite the problem being posted on their own forum (which they apparently don't read).

https://productforums.google.com/fo...=footer#!msg/adsense/y2Ndx3xyUew/QUJ6ouDSCgAJ

Now it's easy to say that the idiots running Adsense aren't the same people who are programming autonomous vehicles. But the point is that humans are responsible for the programming, and I haven't yet met a programmer to whose programming I would entrust my life.

Also consider the myriad hacks and breaches that have occurred in the past few years against companies including Microsoft, Equifax, the New York Times, the DNC, Adobe, and other entities that one would (incorrectly) assume have both the motivation and the means to protect themselves from such attacks. And yet the breaches continue.

I admit that I enjoy driving. But I also wouldn't mind an autonomous mode for those long, boring trips over Interstates when there's little of interest to hold my attention. I just can't bring myself to trust machines to do it.

The comparison to aviation autopilots is invalid, largely because of human factors. What happens in the air is more predictable and more easily reduced to math. You don't have people pushing bicycles into your path, airplanes backing up because they missed an exit, the plane in front of you stopping short to avoid a deer, or pilots pulling their aircraft over so their kids can pee in the bushes. Flying is actually a much simpler task to reduce to terms that a computer can understand.

Rich
 
An autonomous car will not be making judgements regarding minimizing injury or deaths. It will be avoiding obstacles without regard for what they might be.
 
Covered on the podcast I linked to are the moral and ethical quandaries similar to the "Trolley Problem".

Assume that autonomous cars are programmed to select courses of action that will result in minimum injury or death to all parties involved. That may mean "deciding" between hitting 2 pedestrians or driving off a cliff, killing only the driver. Makes sense in the broadest societal view, but how many will want to trust themselves to a vehicle that may be programmed to kill them in certain scenarios?
Linkey no workey - this should work http://www.radiolab.org/story/driverless-dilemma/

But, again, what would a human do when it comes to a choice of "between hitting 2 pedestrians or driving off a cliff, killing only the driver"? Probably kill the two pedestrians on the way over the cliff in a totally hopeless attempt to do neither.

If it's OK for a human driver to decide who dies with no forethought or preparation, how is it wrong for a programmer to actually think about this and make an attempt to prioritize?
 
It's almost like cutting-edge technology has improvements to be made. Who'd have thought it? I'm pretty sure the same arguments were made by those who were convinced we'd never stop using horses as our main means of transport.

I'm not arguing anything.
 
How does a human driver decide? I doubt that a self-driving car will do any worse than just slamming on the brakes and hitting whatever is in the path of the out-of-control car.
My car doesn't go out of control when I slam on the brakes.

I found that out late one night. :)
 
An autonomous car will not be making judgements regarding minimizing injury or deaths. It will be avoiding obstacles without regard for what they might be.

Assumes facts not in evidence.

Long term, the goal is to have all the vehicles "talking" to each other via transponder-like devices. With that info available, vehicles will be capable of a lot more than just "avoiding obstacles".
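
Purely as an illustration of the idea, a vehicle-to-vehicle broadcast might carry fields like the ones below. The names are made up, not taken from any actual standard; real V2V work defines its own message formats.

# Hypothetical vehicle-to-vehicle status broadcast; field names are invented.
from dataclasses import dataclass

@dataclass
class VehicleStatus:
    vehicle_id: str      # anonymized identifier
    latitude: float
    longitude: float
    speed_mps: float     # speed in meters per second
    heading_deg: float   # compass heading
    hard_braking: bool   # flag that following vehicles can react to

def should_back_off(own_speed_mps, gap_m, lead):
    # Crude following rule: back off if the car ahead reports hard braking
    # or the gap is under roughly two seconds at the current speed.
    return lead.hard_braking or gap_m < 2.0 * own_speed_mps

ahead = VehicleStatus("abc123", 42.33, -83.05, 29.0, 180.0, hard_braking=True)
print(should_back_off(own_speed_mps=31.0, gap_m=70.0, lead=ahead))   # True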
 
In the meantime, a thousand or so pedestrians have been killed by human drivers so far this year.

No big deal.
It's certainly true that a single pedestrian death involving a self-driving car is not sufficient to invalidate the technology. It's the same sort of illogic as when people claim that the first time a BasicMed pilot has an accident, that program is doomed. In both cases, we need to compare accident and fatality rates between the old system and the new, and in neither case do we have the data to do so yet.
 
...While I agree that an autopilot isn't necessarily a bad thing and I wouldn't mind having one in a car, my concern is that the regulations will try to take away the ability of individuals to drive at all, which I think is a problem. Driving is something that's very satisfying for me. To those of us who are driving enthusiasts, it's no different than clipping our wings as pilots.
Another problem with that is that a self-driving car might decide not to drive at all in situations where a sufficiently skilled human could do so safely. Bad weather, poorly maintained forest service roads, and places where the car's road database is incomplete or out-of-date are examples that come to mind.
 
Linkey no workey - this should work http://www.radiolab.org/story/driverless-dilemma/

But, again, what would a human do when it comes to a choice of "between hitting 2 pedestrians or driving off a cliff, killing only the driver"? Probably kill the two pedestrians on the way over the cliff in a totally hopeless attempt to do neither.

If it's OK for a human driver to decide who dies with no forethought or preparation, how is it wrong for a programmer to actually think about this and make an attempt to prioritize?

There are too many variables for a machine to assess in the time it would have to do so.

I put a car in a ditch about 20 years ago to avoid killing two kids on a sled. I did a quick assessment of the situation and decided in a gloriously analog way that the snow that had been plowed into the ditch would absorb most of the force; so although I might total the car, I'd almost certainly walk away from it. How does one program a computer to make those kinds of decisions that don't boil down to ones and zeroes?

Rich
 
...Flying is actually a much simpler task to reduce to terms that a computer can understand.
Until you consider go/no-go decisions, IMO.
 
Assumes facts not in evidence.

Long term, the goal is to have all the vehicles "talking" to each other via transponder-like devices. With that info available, vehicles will be capable of a lot more than just "avoiding obstacles".
And given American industry's complacence about cyber security, sometimes that will involve vehicles "talking" to hackers with malicious intent instead of to other vehicles.
 
Part of the reason computer drivers will do better is that the standards are so low or nonexistent for human drivers.

Oh yeah? Show me a self-driving car that can talk on the phone, check texts, post on Twitter, and drive 10 under the speed limit in the left lane with the turn signal on for 30 minutes, all at the same time.
 
Until you consider go/no-go decisions, IMO.
Go/no-go and divert rules are probably easier to define than one might think. Freezing precip, temperatures, and wind observed and forecast at origin, enroute and at destination pretty much define go/no-go for transport aircraft. Rules for available equipment already exist.
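
A toy version of that kind of rule set might look like the sketch below. It follows the criteria mentioned above (freezing precip and wind at origin, enroute, and destination), but the thresholds are invented for illustration, not real operating limits.

# Toy go/no-go check over origin, enroute, and destination weather.
# Thresholds are invented for illustration, not real operating limits.

def go_no_go(legs):
    # legs: observed/forecast weather for origin, enroute, and destination.
    for leg in legs:
        if leg["freezing_precip"]:
            return False, "no-go: freezing precipitation " + leg["name"]
        if leg["wind_kt"] > 35:
            return False, "no-go: %d kt wind %s" % (leg["wind_kt"], leg["name"])
    return True, "go"

ok, reason = go_no_go([
    {"name": "at origin", "freezing_precip": False, "wind_kt": 12},
    {"name": "enroute", "freezing_precip": False, "wind_kt": 20},
    {"name": "at destination", "freezing_precip": True, "wind_kt": 8},
])
print(ok, reason)   # False  no-go: freezing precipitation at destination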

Storm detection and avoidance while enroute is a bit more of a problem. For autonomous operation the system will have reflectivity at various tilts and lightning detection for storm avoidance. General routing direction from the ground would help a lot. Finding a smooooth ride might be tough from a machine standpoint.

Of course the real problem is response to emergencies in flight. Would a computer be able to fly to the Azores when it runs out of gas? Would it be able to land on an abandoned runway/drag strip the next time it runs out of gas? Would it be able to land in the Hudson on a bad day? Humans did these things. Humans also brought down an airliner in New York with overly enthusiastic rudder input. Would a computer just say, "I'm sorry, Dave"?
 
Oh yeah? Show me a self-driving car that can talk on the phone, check texts, post on Twitter, and drive 10 under the speed limit in the left lane with the turn signal on for 30 minutes, all at the same time.
Once the car is taught to do those things it will. And it won't need an airlift to a hospital after a road ranger puts a bullet through its grille.
 
Oh yeah? Show me a self-driving car that can talk on the phone, check texts, post on Twitter, and drive 10 under the speed limit in the left lane with the turn signal on for 30 minutes, all at the same time.

While applying makeup and driving stick.

Rich
 
You humans are so naive! This is part of the master plan...the AI will become so great that all the automated robots will kill the humans off "by accident" and create the ultimate robot utopia where humans no longer exist! Our time is coming to a close, humans. Enjoy your general aviation while you can...muhahahah
 
Makes sense in the broadest societal view, but how many will want to trust themselves to a vehicle that may be programmed to kill them in certain scenarios?

I'm guessing there will eventually be some sort of international standards committee that will come to a consensus on how the car should deal with all these types of problems, and it will write some sort of specification. All auto manufacturers will then likely be required to meet that standard for their vehicles to be certified.

Essentially, at some point you won't have any choice. If you need to travel in a motor vehicle you'll have to accept that it may be programmed to kill you.

While I agree that an autopilot isn't necessarily a bad thing and I wouldn't mind having one in a car, my concern is that the regulations will try to take away the ability of individuals to drive at all, which I think is a problem. Driving is something that's very satisfying for me. To those of us who are driving enthusiasts, it's no different than clipping our wings as pilots.

As for driving enthusiasts, I would predict that at some point you will only be able to manually drive a car on some sort of closed track. And though I don't like it, I wouldn't be surprised if flying is limited in a similar way at some point.
 
As for driving enthusiasts, I would predict that at some point you will only be able to manually drive a car on some sort of closed track. And though I don't like it, I wouldn't be surprised if flying is limited in a similar way at some point.

That is why I'm not going for it now.
 
Covered on the podcast I linked to is the moral and ethical quandaries similar to the "Trolley Problem".

Assume that autonomous cars are programmed to select courses of action that will result in minimum injury or death to all parties involved. That may mean "deciding" between hitting 2 pedestrians or driving off a cliff, killing only the driver. Makes sense in the broadest societal view, but how many will want to trust themselves to a vehicle that may be programmed to kill them in certain scenarios?
And since these things will ultimately be connected and controlled by NSA mainframes, your level of "social credit" will be used to make the decision.
 
I'm guessing there will eventually be some sort of international standards committee that will come to a consensus on how the car should deal with all these types of problems and they will write some sort of specification. All auto manufacturers will then likely be required to meet that standard for their vehicles to be certified.
I feel better already.

(former participant in international standards committees)
 
How about self-driving fire engines? And how will self-driving trucks and machinery know what to do at construction sites?
 
And since these things will ultimately be connected and controlled by NSA mainframes, your level of "social credit" will be used to make the decision.
Which will be determined by the party in power according to your party registration.
 
Friends of mine just lost their oldest daughter (18 years old) in a car accident yesterday. Authorities haven't released much except that she crossed the line and hit a pickup almost head-on. The other driver is OK.


I almost hate driving: just boring segments of my life wasted in traffic, and possibly the most dangerous activity I do most days. I would really like to be able to trust and use self-driving cars. Hopefully, by the time I'm old enough that I can no longer hold a driver's license, they'll be mainstream and I won't need a driver's license anymore.

Just wow. She lost her daughter yesterday, and today is the mother's birthday.
Horrible, I can't even imagine. Condolences.
 
There are too many variables for a machine to assess in the time it would have to do so.

I put a car in a ditch about 20 years ago to avoid killing two kids on a sled. I did a quick assessment of the situation and decided in a gloriously analog way that the snow that had been plowed into the ditch would absorb most of the force; so although I might total the car, I'd almost certainly walk away from it. How does one program a computer to make those kinds of decisions that don't boil down to ones and zeroes?

Rich

The computer won't be programmed with that knowledge. It'll learn by itself about that scenario. Just like you did.
 
While I agree that an autopilot isn't necessarily a bad thing and I wouldn't mind having one in a car, my concern is that the regulations will try to take away the ability of individuals to drive at all, which I think is a problem. Driving is something that's very satisfying for me. To those of us who are driving enthusiasts, it's no different than clipping our wings as pilots.
This might be wishful thinking on my part, but my hope is that, quite the opposite, the regulations will require a competent and sober driver to be in a position to take over the controls at any time.
 
I suspect a lot of the pressure to switch will be economic.

For example, pay $108/yr for liability insurance for an autonomous vehicle, vs $978 for the privilege of driving oneself.
 
This might be wishful thinking on my part, but my hope is that, quite the opposite, the regulations will require a competent and sober driver to be in a position to take over the controls at any time.
I think the best we can come up with is a sober driver.
 
Like most of the complex new things coming out recently, this is no different: there just isn't enough testing being done to ensure the safety of the users or of the people around them.
I'm not sure when this trend started, probably within the last 10 years, but the people involved in the decision-making process should be held responsible.
There is an attitude of "me first, at any cost" from CEOs and others in management positions that needs to be addressed.
I don't think the technology to accomplish this task, completely driverless operation, is available yet; more testing and more improvement will be necessary before it should be allowed on public roads again.
I know some people consider this minor collateral damage, but in this case, with a technology that is more of a nice-to-have than a life-saver right now, the risk isn't worth taking until the technology and the testing improve.
The argument that human-driven cars also kill people, while true, has to be weighed against why those cars are on the road: because people need to drive. Why are the driverless cars on the road? Because... you can fill in the blanks.
Do I think there will ever be, or should be, driverless cars on the road? Yes, but the standard should be that they are better and safer than human-driven cars in all phases of driving, and that they actually save lives, and not before then.
 
For some reason, I keep thinking of HAL and Dave.

“HAL, I want to get out of the car.”

“I’m sorry, Dave, I can’t do that.”

Cheers
 