Trusting AI With Life or Death Decisions

Palmpilot

There's been a lot of discussion of when and whether artificial intelligence will be able to safely fly autonomous airliners, but here's a leading technology pioneer advocating a ban on allowing AI to make a different kind of life or death decision, i.e., "Does this person need killing?"

Elon Musk backs call for global ban on killer robots
 
I'm generally of the opinion that if a technology can be built, someone is inevitably going to build it regardless of any bans.
 
I'm afraid you're right.
 
Also agree you are right. AI can't even reliably run my household appliances. My distrust is deep.
 
Current state of the art in software development (including AI and machine learning) isn't there yet; all software still sucks; we aren't cranking out significantly more reliable or trustworthy systems than we were a few decades back.

IMHO, there is a breakthrough that has to happen, or, less dramatically, a critical mass has to coalesce - applied science has to catch up and integrate a lot of newer, better tech into a trustworthy whole. Outside the pretty GUIs, the essentially pre-written code, and much better networking, we just aren't doing things in a radically more advanced way than we were a while back. Things moved fast, for sure, for quite a while, so we built on some shaky foundations: OSes, network topologies, development environments, data stores, which are good enough for running e-Commerce or counting beans or doing CAD/CAM or operating test equipment or driving cell phones. But not for handling unanticipated outlier situations - basically none of the foundational stuff is solid enough to trust for managing non-routine, unpredictable situations.

I think (and I could be wrong) that a new tech/development method/platform has to evolve (or burst on the scene) before true autonomy is practical.

Musk is Musk, a cool guy, all that - personally, having a family member in military aviation, I'm all for autonomous systems seeking out, engaging, and killing the enemy. Regularly and repeatedly. Not so much for flying Grandma to Vegas. . .
 
There is a difference: neural networks and deep learning. I agree that if we have to write the code, then we're no better off than we were in the '80s - and we'll probably be very little better off 100 years from now. But AI doesn't have to be coded - it learns by itself.

Take a look at this:

The difference between the first part and the second part involved no additional programming - it's just self-learning over 3,000 miles. Now multiply that by a million cars driving 10,000 miles each per year, and you can see how that's a game changer.
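Rough numbers, just multiplying the figures already in the post (a quick back-of-the-envelope, nothing else assumed):

```python
# Back-of-the-envelope fleet-learning scale, using only the numbers from the post.
demo_miles = 3_000                 # self-taught miles in the demo video
cars = 1_000_000                   # hypothetical fleet size
miles_per_car_per_year = 10_000    # assumed annual mileage per car

fleet_miles = cars * miles_per_car_per_year
print(f"{fleet_miles:,} fleet miles/year")                       # 10,000,000,000
print(f"~{fleet_miles // demo_miles:,}x the demo's experience")  # ~3,333,333x
```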

Of course, that is not quite a paradigm you can extend to a military machine; you can't just go around shooting people to learn which ones are friendly. Although that is actually how much of humanity learned it (and still does), I don't think we'll accept that from machines.
 
I think we're patting AI on the back too much - IMHO, it "aggregates" very well, but for applying "learning" to new or outlier situations - it still sucks. . .
 
I think it depends on the Al.

This Al is probably pretty dependable:
1011_Flannel_7.png


This one not so much:
albundy2.jpg
 
If you don't trust the AI then fall back on the TC or go VFR.
 
Can the A.I. car learn enough that it can handle situations that were unanticipated and not previously encountered?
 
Yes. There are many examples in aviation; UA232 and US1549 are two of them.

UA232 would probably be better handled by an AI engine anyway. Take a look at 6:40 into this video:
 
Aviation Week is predicting autonomous helicopters by the mid-2030s. I'll be retiring around then, so I've got no problems with the AI wave that's coming.
 
UA232 would probably be better handled by an AI engine anyway.
Of course, because we now have the experience from UA232 to know that we need to program the AI for similar problems. The possibility of the loss of all primary flight controls was not considered prior to that accident.

The AI needs to figure things out that haven't previously been imagined as the crew of UA232 was able to do. An AI driving a car, or other ground vehicle, can stop if a situation exceeds its programming. An AI flying an airplane can not.
 
That's exactly NOT AI. An AI system doesn't get pre-programmed for a specific purpose. To teach it in the first place, you give the system access to a simulator - and assign goals and cost, and then you tell the AI to minimize cost while still reaching the goal. You don't specifically teach the system how to fly the airplane, or even that it is an airplane - it learns it through doing, with continuous feedback and adjustment. So when you take away a control surface, like in the case of the drone video above with the props damaged, it can learn another way to fly using whatever controls and control surfaces are remaining.

AI can actually do better than humans with sudden unexpected scenarios. Let's say there was an explosive decompression and you only have your right control surfaces operational: an AI can quickly run a few thousand "hours" of simulator training up there by itself in a couple of seconds, to teach itself the best way to fly the airplane with whatever is remaining of it. A human can't do that. We have to learn in real time - AI doesn't.
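For concreteness, here's a minimal sketch of that goal-and-cost loop: a made-up toy "simulator" plus crude random search standing in for whatever learning algorithm a real system would use. Every name, number, and the dynamics themselves are invented for illustration; this is not anyone's actual flight-control software.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(policy_weights, steps=200, broken_surface=False):
    """Toy stand-in simulator: a 2-state 'aircraft-ish' model.
    The learner never sees these equations; it only sees the resulting cost."""
    state = np.array([1.0, 0.0])                     # e.g. pitch error, pitch rate
    cost = 0.0
    for _ in range(steps):
        control = np.tanh(policy_weights @ state)    # tiny linear policy
        if broken_surface:
            control *= 0.3                           # simulated damage: less control authority
        state = state + 0.05 * np.array([state[1], -0.5 * state[0] + control])
        cost += state[0] ** 2 + 0.01 * control ** 2  # goal: kill the error, cheaply
    return cost

def train(broken_surface=False, iters=300):
    """Minimize cost by trial in the simulator (crude random search)."""
    best_w = np.zeros(2)
    best_cost = simulate(best_w, broken_surface=broken_surface)
    for _ in range(iters):
        candidate = best_w + 0.2 * rng.standard_normal(2)
        c = simulate(candidate, broken_surface=broken_surface)
        if c < best_cost:
            best_w, best_cost = candidate, c
    return best_w, best_cost

print(train())                       # learn to 'fly' the intact toy model
print(train(broken_surface=True))    # re-learn after simulated damage, same loop
```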

The technology isn't quite there yet. The NVidia PX2 stuff that Tesla is using is starting to be pretty cool though. Take a look here - the system learns how to recognize cars and people without being programmed to do so - it was just shown a bunch of pictures of cars and people, and now it can recognize them:
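To illustrate the "just shown a bunch of labeled pictures" idea, here is a generic supervised-learning sketch with fake random data standing in for real photos. It is not NVidia's or Tesla's code, and the network shape and numbers are arbitrary:

```python
import torch
import torch.nn as nn

# Fake data in place of real labeled photos: 256 random "images", labels 0 = car, 1 = person.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 2, (256,))

model = nn.Sequential(                              # tiny CNN classifier
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                # 32x32 -> 16x16
    nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                                # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(16 * 8 * 8, 2),                       # two classes
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The only "programming" is this loop; what a car or a person looks like is never
# spelled out anywhere -- the weights pick that up from the labeled examples.
for epoch in range(5):
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```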

 
You're the one that posted the video of the autonomous car learning. It didn't do very well on its first attempt at a new situation. It needed experience to learn how to handle the situations. My question remains. How does the AI handle situations that were unanticipated and not previously encountered?
 
Yeah, OK, but in that case the new situation was "driving and staying on roads". No human being is born intrinsically capable of driving a car or flying an airplane either.

In the case of UA232 - if you put the smartest Fields Medal winner in the world behind the yoke of a DC-10 and tell him to go fly without any usable control surfaces, and he has never flown an airplane before, I don't care how smart that person is, the outcome would not have been remotely comparable to that of a pilot with 30,000 hours of experience.

Humans need experience before they can handle unanticipated situations. Same with AI.

It's not like Tesla Autopilot drives around because it knows where the roads are and knows where all possible other vehicles are going to be. It doesn't. Every second that it drives, it is encountering a previously unanticipated scenario that it has to deal with. Nothing is pre-programmed. It doesn't even use the car's built-in, very detailed maps - it just figures out roads as it encounters them. Of course not perfectly yet, but you can see what's coming.
 
So you ARE saying that an AI can handle unanticipated situations for which it has no programming nor prior data? Is there evidence of this, or just science fiction?
 
You're looking at it from the wrong point of view. Really, the ONLY purpose of an AI is to handle unanticipated scenarios. You don't need AI when something is predictable and repeatable - e.g., robots welding a car body don't need AI.

OK, try this - 55 seconds in. The car has never seen traffic cones before, and it handles them and even leaves the road to maintain safety:

In fact, everything is unanticipated, even driving down normal roads, since that particular car doesn't have any object detection, mapping, path planning, or control components as part of its programming.
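That "no hand-coded object detection, mapping or path planning" approach is essentially end-to-end learning: a single network maps camera pixels straight to a steering command, trained on logged human driving. A toy sketch with fake data; the frame size, layer counts, and everything else here are made up, not that car's actual software:

```python
import torch
import torch.nn as nn

# Fake "logged driving": camera frames paired with the human driver's steering angle.
frames = torch.randn(512, 3, 66, 200)       # arbitrary frame size
steering = torch.randn(512, 1)              # recorded steering angles

net = nn.Sequential(                        # pixels in, one steering command out
    nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
    nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(48 * 5 * 22, 100), nn.ReLU(), # 48 channels x 5 x 22 for 66x200 inputs
    nn.Linear(100, 1),
)

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for step in range(20):
    pred = net(frames[:64])                 # small batch of frames
    loss = nn.functional.mse_loss(pred, steering[:64])
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No rule about cones, lanes, or obstacles appears anywhere in that code; whatever cone-avoidance behavior emerges comes from the training data.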
 
It's never seen traffic cones before? It ran over the traffic cones the first time it saw them at the beginning of the video.
 
Humans need experience before they can handle unanticipated situations. Same with AI.
Will the AI "pilot" be required to have 1500 hours of flight time before it can be allowed to pilot an airliner?
 
Of course, because we now have the experience from UA232 to know that we need to program the AI for similar problems.
There are already deterministic methods for adapting to aerodynamic uncertainties and controllability changes that do not require "AI". They are in service on unmanned stuff and they work quite well.

The problem with predicting what will be possible based only on what is currently possible is that you're always behind advances that aren't yet in general use.

Nauga,
who is both adaptive and reconfigurable
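For what it's worth, "adaptive and reconfigurable" doesn't have to mean machine learning at all; classical model-reference adaptive control is one deterministic example. A toy sketch for an assumed first-order plant with unknown gains, Euler-integrated - purely illustrative, not any fielded system:

```python
# Toy model-reference adaptive control (MRAC):
#   plant:      ydot  = a*y  + b*u      (a, b unknown to the controller, b > 0)
#   reference:  ymdot = am*ym + bm*r    (the behavior we want)
#   control:    u = th1*r + th2*y, with th1, th2 adapted online from tracking error.
a, b = 1.0, 2.0        # "true" plant (unstable open loop), hidden from the controller
am, bm = -4.0, 4.0     # desired closed-loop dynamics
gamma = 2.0            # adaptation gain
dt, T = 0.001, 20.0

y = ym = 0.0
th1 = th2 = 0.0
for k in range(int(T / dt)):
    t = k * dt
    r = 1.0 if (t % 4.0) < 2.0 else -1.0     # square-wave reference command
    u = th1 * r + th2 * y
    e = y - ym                               # tracking error
    th1 += dt * (-gamma * e * r)             # Lyapunov/MIT-rule style adaptation
    th2 += dt * (-gamma * e * y)
    y  += dt * (a * y + b * u)               # integrate plant
    ym += dt * (am * ym + bm * r)            # integrate reference model

print(f"final tracking error {e:+.4f}, th1={th1:.2f} (ideal {bm / b:.2f}), "
      f"th2={th2:.2f} (ideal {(am - a) / b:.2f})")
```

No neural net, no training data; the gains reconfigure themselves online from the tracking error.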
 
That's exactly NOT AI. An AI system doesn't get pre-programmed for a specific purpose. To teach it in the first place, you give the system access to a simulator - and assign goals and cost, and then you tell the AI to minimize cost while still reaching the goal. You don't specifically teach the system how to fly the airplane, or even that it is an airplane - it learns it through doing, with continuous feedback and adjustment. So when you take away a control surface, like in the case of the drone video above with the props damaged, it can learn another way to fly using whatever controls and control surfaces are remaining.

AI can actually do better than humans with sudden unexpected scenarios. Let's say there was an explosive decompression and you only have your right control surfaces operational: an AI can quickly run a few thousand "hours" of simulator training up there by itself in a couple of seconds, to teach itself the best way to fly the airplane with whatever is remaining of it. A human can't do that. We have to learn in real time - AI doesn't.

The technology isn't quite there yet. The NVidia PX2 stuff that Tesla is using is starting to be pretty cool though. Take a look here - the system learns how to recognize cars and people without being programmed to do so - it was just shown a bunch of pictures of cars and people, and now it can recognize them:


Therein lies the rub. Did we choose the goals and assign the costs correctly for all situations? That's where the AI gets dangerous, and what the cautious but knowledgeable people are concerned about. Assign a goal and a cost and the AI will minimize the cost while reaching the goal. It'll do it very well. But if there's some other goal that should be taken into account, the AI will have no clue unless the humans training it tell it to. Imagine your explosive decompression example but without having told the AI to get below 10,000 feet ASAP. It'll fly the plane under control at 30,000 feet until it reaches a descent point to land (probably an emergency landing site, I would hope). And the passengers will die of hypoxia. THAT'S the sort of gotcha that we'll, again, learn by trial and error.

And the NVidia Deep Learning stuff is very cool. And I want one of their deep learning computers. But $160K is a little out of my R&D budget...
 
You can probably set it as a goal that humans shouldn't be left at 30,000 ft - or, even easier, you can let it discover that by itself in the simulator.

Assign, e.g., a $10M value to each passenger and it will quickly figure out that it's really expensive to fly around at 30,000 ft while not pressurized...
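A toy illustration of that cost-assignment point. All the numbers and function names below are invented; the only idea taken from the posts is the per-passenger penalty. Leave it out and the optimizer happily "saves fuel" at 30,000 ft unpressurized; put it in and the cheap plan changes:

```python
# Invented numbers throughout -- just showing how a missing cost term flips the "optimal" plan.
PASSENGERS = 150
PASSENGER_VALUE = 10_000_000     # the $10M-per-passenger idea from the post

def mission_cost(cruise_alt_ft, pressurized, include_passenger_term):
    fuel_cost = 20_000 if cruise_alt_ft >= 25_000 else 35_000   # high cruise = cheaper fuel (made up)
    cost = fuel_cost
    if include_passenger_term and cruise_alt_ft > 10_000 and not pressurized:
        cost += PASSENGERS * PASSENGER_VALUE     # hypoxia penalty dominates everything else
    return cost

for include in (False, True):
    best = min(
        ((alt, mission_cost(alt, pressurized=False, include_passenger_term=include))
         for alt in (8_000, 30_000)),
        key=lambda p: p[1],
    )
    print(f"passenger term {'on' if include else 'off'}: "
          f"best altitude {best[0]:,} ft, cost ${best[1]:,}")
```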
 
I think there's an unreasonable fear of AI... clumsy mistakes, ill intent, forgetfulness, having a "bad day" are all human fallibilities.

Though not really AI, and I know a lot of pilots hate Airbus FBW as a point of principle, I happen to feel safer as a pax when on an Airbus. And how many Teslas have crashed compared to the average person merging with no turn signal while updating Facebook? Nothing is perfect, but machines make fewer mistakes.

It doesn't have to be The Matrix or Skynet... it can also be Star Trek's Data or the AI machines in Interstellar.
 
What about killer robots? Are we all doomed?
 
My question with AI always comes down to impossible situations. Let's use a car as an example. For some reason this car has gotten into a situation where it will either hit (and presumably kill) pedestrians or will kill the occupant(s) of the car. What choice does it make?

What if there are 2 pedestrians and one occupant? What if one person in the scenario is 89 years old and another is 5? What if one is pregnant?

Now let's presume that the AI makes a choice and kills someone. We know there will be a lawsuit. This is America after all. Who gets blamed? There is no "driver" so I presume it would go back to the manufacturer and further go back to the company that programmed the software (and the engineers that made the ethics choice). What company is going to take on that kind of liability?

Now for the sake of argument let's assume that to overcome the liability problem we have an independent ASTM or SAE committee set up to determine the "correct" response to such an impossible situation. Let's say it comes to a consensus that the car should kill the occupants of the vehicle over the pedestrians. Who is going to purchase a vehicle knowing that it will choose to kill you even though there was another possible choice?

Good luck solving these problems. And I'm glad I don't have to be the one making those ethical choices. I'm not sure I could live with the decisions I made.
 
You can probably set it as a goal that humans shouldn't be left at 30,000 ft - or, even easier, you can let it discover that by itself in the simulator.

Assign, e.g., a $10M value to each passenger and it will quickly figure out that it's really expensive to fly around at 30,000 ft while not pressurized...
Sure you can. But which other constraint did you not assign a proper cost to? It's not the AI that worries me, it's the humans that train the AI that screw up.

I believe that computers can do a great many things better than I can. But I also know that even if they are safer, there will be human screw-ups that will cost lives until we get all the inputs correct. Maybe, even probably, fewer lives than leaving the same jobs to humans that get bored, have a bad day, etc., but still lives.

John
 
My question with AI always comes down to impossible situations. Let's use a car as an example. For some reason this car has gotten into a situation where it will either hit (and presumably kill) pedestrians or will kill the occupant(s) of the car. What choice does it make?
I think you kind of answered your own quandary there... "impossible situations" - what choice would any person make in any of these crazy situations? These are common moral dilemma vignettes that often present themselves as thought exercises.

Frankly, while not perfect, I feel like people are far more fallible and dangerous behind a wheel than a machine. Maybe not us - we're all above average here on POA, just like the nice folks of Lake Wobegon... but elsewhere out there??
 
Definitely true, and if we could all agree on the best (most ethical) course of action in all situations then an AI would be much more likely than a human driver to deliver those results. Not only that, but the probability of ending up in those situations should drastically decrease or even approach 0.

But to get to that point some human somewhere must make a decision about who lives and who dies in every possible situation. As you point out, they are common thought exercises, but at some point with this technology someone will need to code those into real machines and realize that they are killing real people. A thought exercise and what essentially amounts to the murder of real living people are two very different things.

And to get to that point there will be lawsuits with large settlements.

In the end, I can't wait for driverless cars. I just think there are a lot of difficult ethical quandaries and liability issues that need to be solved first.
 
One question in my mind is, "How much freedom are you willing to give up to have driverless cars?" Will driving your own car become illegal? If so, that would seem to eliminate the freedom to use a motor vehicle at all in some locations. For example, it's hard to imagine a driverless car being able to deal appropriately with some of the mountainous forest service roads in poor condition that I drove on last summer. Will it become impossible to use a motor vehicle to visit such wild and woolly places?
 
What about killer robots? Are we all doomed?

Well, we don't actually have to arm the robots... Though I can see the vein on Wayne LaPierre's forehead pop out for even suggesting that. He'd arm my Roomba if he could.
 
My question with AI always comes down to impossible situations. Let's use a car as an example. For some reason this car has gotten into a situation where it will either hit (and presumably kill) pedestrians or will kill the occupant(s) of the car. What choice does it make?

What if there are 2 pedestrians and one occupant? What if one person in the scenario is 89 years old and another is 5? What if one is pregnant?

Now let's presume that the AI makes a choice and kills someone. We know there will be a lawsuit. This is America after all. Who gets blamed? There is no "driver" so I presume it would go back to the manufacturer and further go back to the company that programmed the software (and the engineers that made the ethics choice). What company is going to take on that kind of liability?

Now for the sake of argument let's assume that to overcome the liability problem we have an independent ASTM or SAE committee set up to determine the "correct" response to such an impossible situation. Let's say it comes to a consensus that the car should kill the occupants of the vehicle over the pedestrians. Who is going to purchase a vehicle knowing that it will choose to kill you even though there was another possible choice?

Good luck solving these problems. And I'm glad I don't have to be the one making those ethical choices. I'm not sure I could live with the decisions I made.

Mercedes-Benz put a stake in the ground, which is to ALWAYS prioritize the safety of the driver:

http://blog.caranddriver.com/self-driving-mercedes-will-prioritize-occupant-safety-over-pedestrians/

I think that is a reasonable approach. Everything else is an attempt to solve the trolley problem, which is inherently unsolvable.
 