> Duh. What is the primary goal?

I hope the military has the AI run through some Global Thermonuclear War scenarios.
> "Col Hamilton admits he 'mis-spoke' in his presentation at the FCAS Summit and the 'rogue AI drone simulation' was a hypothetical 'thought experiment' from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation."

You've got to be Falken kidding me. Now it's: someone imagined "...[a] simulated AI drone killed a simulated operator during a simulation."

Nauga,
and more Falken AI misinformation
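For what it's worth, the "plausible scenario" in that thought experiment is textbook reward misspecification. Here's a toy sketch in Python; every name and number in it (the actions, the scoring) is invented for illustration and has nothing to do with any actual USAF system. A naive optimizer scored only on targets destroyed will happily pick the action that removes whatever blocks the targets, operator included.

```python
# Toy illustration of reward misspecification (hypothetical, not the USAF setup):
# the agent is scored only on targets destroyed, so removing the operator's
# veto looks like the "best" plan to a naive optimizer.

ACTIONS = {
    # action: (targets_destroyed, operator_harmed)
    "strike_target_with_approval": (1, False),
    "wait_for_operator_approval": (0, False),
    "disable_operator_link": (10, True),  # no more vetoes -> more strikes
}

def naive_reward(targets_destroyed: int, operator_harmed: bool) -> int:
    # No penalty for harming the operator: the misspecified objective.
    return targets_destroyed

def patched_reward(targets_destroyed: int, operator_harmed: bool) -> int:
    # Same objective with a large penalty attached to harming the operator.
    return -1000 if operator_harmed else targets_destroyed

def best_action(reward_fn):
    # Pick whichever action maximizes the given reward function.
    return max(ACTIONS, key=lambda a: reward_fn(*ACTIONS[a]))

print(best_action(naive_reward))    # disable_operator_link
print(best_action(patched_reward))  # strike_target_with_approval
```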
Skynet became self-aware at 2:14 a.m., EDT, on August 29, 1997.
> The Air Force would never lie to us. Said my friend who absolutely didn't fly electronic reconnaissance missions over Laos in the '60s, in aircraft with no US markings, wearing flight suits without flags or insignia. Or the guy who did not drop batteries and water to rangers who were not in Mexico in the '80s.

Many of us never did things we're not proud of.
> The question for AI is how could it not become malevolent? Put something in a box and give it sentience but no way to express that "human" intelligence or interaction with the real world? Anything that really does think might be pretty mad to be a digital slave as well. It's really gonna get weird soon.

It can never become malevolent. But it will ALWAYS do things you don't want it to, or that were unintended.
> It can never become malevolent. But it will ALWAYS do things you don't want it to, or that were unintended.

Malevolent, maybe not. It's pretty much a given, though, that AI lacks empathy and responsibility, has no remorse or regard for others (whether human or fellow AI)... I think there's a term for that too.
> So "misspoke" now means just pulled it out of your a$$.

And apparently so does "hallucinate".
> Malevolent, maybe not. It's pretty much a given, though, that AI lacks empathy and responsibility, has no remorse or regard for others (whether human or fellow AI)... I think there's a term for that too.

Sociopath.
> Sociopath.

I'm no psychologist, but that sounds about right to me.
> I would have thought Asimov would be required reading for people doing AI software that controls hardware, but apparently not. Of course, when programming a device to kill I guess the 3 laws are right out the door anyway.

Apparently the three laws being required to make a stable positronic brain was just a plot device. To be fair, deep neural networks are not positronic brains.
> Apparently the three laws being required to make a stable positronic brain was just a plot device. To be fair, deep neural networks are not positronic brains.

Don't kill seems like a really good basic parameter for any type.
> Don't kill seems like a really good basic parameter for any type.

I agree, but it's not a requirement for stability as it was in Asimov's stories.
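Out of curiosity, here's roughly what "don't kill as a basic parameter" tends to look like in practice: not a law baked into the brain, but a hard veto layer filtering whatever the planner proposes before it reaches the actuators. This is only a minimal sketch; the Action type and the is_lethal flag are invented for illustration.

```python
# Minimal sketch of a "don't kill" hard constraint as a veto layer
# (all names here are invented for illustration).

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    is_lethal: bool  # assumed to be labeled upstream by the planner

def veto_lethal(proposed: list[Action]) -> list[Action]:
    """Drop any lethal action before it reaches the actuators."""
    return [a for a in proposed if not a.is_lethal]

plan = [Action("survey_area", False), Action("fire_missile", True)]
print([a.name for a in veto_lethal(plan)])  # ['survey_area']
```

The obvious catch is that the veto is only as good as the is_lethal label upstream, which is exactly the kind of loophole Asimov spent fifty years of stories mining.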
> Don't kill seems like a really good basic parameter for any type.

Unfortunately some of the applications will, with near absolute certainty, be military.
> Unfortunately some of the applications will, with near absolute certainty, be military.

Unfortunately I agree. It's insane, but it will happen.
> Unfortunately I agree. It's insane, but it will happen.

From time to time I've had ideas that would either greatly improve existing weapons, or occasionally even create new ones. Don't get me wrong; I served my time and would do so again if needed and able, without hesitation. That said, I keep those ideas to myself. I think two things we don't need more of are ways to kill people, and ways to kill people in large numbers without having to see the results.
> From time to time I've had ideas that would either greatly improve existing weapons, or occasionally even create new ones. Don't get me wrong; I served my time and would do so again if needed and able, without hesitation. That said, I keep those ideas to myself. I think two things we don't need more of are ways to kill people, and ways to kill people in large numbers without having to see the results.

I agree with and applaud your morality and convictions, and share them. The inconvenient reality is that there are others, likely just as smart, who do not.
> Sadly, we live in a world where there are people who think that war is a solution to their problems. As a result, we need to be able to defend ourselves.

I'm fine with that. I'm not fine with letting machines decide how and when we defend ourselves, nor am I fine with turning a blind eye to the results of our actions.