OK. One more time. Another condescending post by Lindberg (#359). Please -- you're beginning to embarrass yourself.
More substance in 3393RP's post (#360). Yes, JQ Public can, and will, make their own decisions. And airlines that address the emerging issue from a strategic perspective will have a market advantage once one, two, perhaps three serious deviations/mishaps result in unfortunate consequences and media coverage. Corporate boardrooms will drive the response, and regulators will be part of the process as well, beyond JQ Public. JQ Public will vote with their feet and their money.
I stand by my statement that there is "no such thing as common sense" in risk perception, but not in the way you interpreted it. Nowhere in any of my posts in this entire thread did I state that "everything must be quantified by rigorous examination". In fact, I gave examples of fields of study in risk perception that are often called "soft sciences" precisely because they are very difficult to quantify, and in many circumstances are not quantifiable by rigorous examination (e.g., psychology, sociology).
Finally, I disagree with your assessment of AI and its path to maturity -- and I have NOT failed to address this issue. In fact, I have provided specific examples of AI development whose foundation will lead to AI-enabled guardrails in the cockpit, constrained by a two-person human crew and process-integrity approaches (a rough sketch of this arrangement follows the link below) -- again, in PRIOR posts (oh my, here I am referencing my prior posts again): an F-16 with AI and an F-4 with AI. Now, I am aware of one post that replied to my references with something along the lines of "so what, a computer flew an F-16". OK, while I disagree with the attempt to trivialize this accomplishment, I'll now add a third example involving AI in a dogfight. And I suspect we all know DARPA is a serious player (see the link below). The AI easily beat a pilot in five rounds of dogfights. Thus, given MY specific examples showing a quantifiable path to maturity for this application of AI, my observation is that not a single specific example has been provided showing that AI is doomed to an "unquantifiable path of maturity" that prevents it from being a viable option for the previously referenced "problem". In other words, where is the research that says "we've conducted research on deploying AI in the cockpit, and have concluded it isn't going to work, full stop"? Please share.
Link: "An F-16 pilot took on A.I. in a dogfight. Here's who won" -- a top pilot lasted less than two minutes in five simulated air battles (fortune.com).
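To make the "guardrail" idea above concrete, here is a minimal sketch of an advisory-only monitor: an AI layer flags a deviation and escalates alerts, while the two-pilot crew retains full authority. Everything in it -- the FlightState fields, the thresholds, the Advisory levels -- is hypothetical and invented purely for illustration; it reflects no real avionics interface.

```python
# Purely illustrative sketch of "AI as guardrail, humans in command".
# All names and thresholds below are hypothetical.

from dataclasses import dataclass
from enum import Enum


class Advisory(Enum):
    NONE = "none"
    CAUTION = "caution"  # flag the deviation to both crew members
    WARNING = "warning"  # insistent alert; still no autonomous action


@dataclass
class FlightState:
    altitude_ft: float
    cleared_altitude_ft: float
    descent_rate_fpm: float  # negative when descending


def guardrail_advisory(state: FlightState) -> Advisory:
    """Advisory-only monitor: detects an altitude deviation and escalates
    alerts, but never takes control away from the two-pilot crew."""
    deviation_ft = abs(state.altitude_ft - state.cleared_altitude_ft)
    if deviation_ft > 300 and state.descent_rate_fpm < -2000:
        return Advisory.WARNING  # serious, rapidly worsening deviation
    if deviation_ft > 300:
        return Advisory.CAUTION  # deviation worth flagging to the crew
    return Advisory.NONE


if __name__ == "__main__":
    # Example: aircraft 500 ft below its clearance in a steep descent.
    state = FlightState(altitude_ft=9500, cleared_altitude_ft=10000,
                        descent_rate_fpm=-2500)
    print(guardrail_advisory(state))  # Advisory.WARNING
```

The design point is simply that the software's only output is an advisory; any corrective action stays with the humans, which is the "constrained by two human crew" arrangement argued for above.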
I suspect our best course forward is to agree to disagree and virtually shake hands. Assuming we can do that, and stop the circular parsing, we can move on to other pressing issues and accept that this thread is worn down. Of course, new ideas, creativity, and positive contributions can always breathe new life into a worn thread. Only time will tell where all this goes.