Horizon Jumpseater goes crazy

OK. One more time. Another condescending post by Lindberg (#359). Please -- you're beginning to embarrass yourself.

More substance in 3393RP's post (#360). Yes, JQ Public can, and will, make their own decisions. And airlines that address the emerging issue from a strategic perspective (after 1, 2, perhaps 3 serious deviations/mishaps that result in unfortunate consequences and media coverage) will have a market advantage. Corporate boardrooms, and regulators, will be part of the process as well, beyond JQ Public. JQ will vote with their feet and their money.

I stand by my statement that there is "no such thing as common sense" in risk perception, but not in the way you interpreted it. Nowhere in any of my posts in this entire thread did I state that "everything must be quantified by rigorous examination". In fact I gave examples of fields of study in risk perception that are often referred to as "soft sciences" because they are very difficult to quantify, and in many circumstances are not quantifiable by rigorous examination (e.g., psychology, sociology, etc.).

Finally, I disagree with your assessment of AI and its path to maturity -- and I have NOT failed to address this issue. In fact, I have provided specific examples of AI development, the foundation of which will lead to AI-enabled guardrails in the cockpit (constrained by two human crew and process-integrity approaches) -- again, in PRIOR posts (oh my, here I am referencing my prior posts again): an F-16 with AI and an F-4 with AI. Now, I am aware of one post that replied to my references with something along the lines of "so what, a computer flew an F-16". OK, while I disagree with the attempt to trivialize this accomplishment, I'll now add a third example involving AI in a dogfight. And I suspect we all know DARPA is a serious player (see link below). The AI easily beat a pilot in 5 rounds of dogfights. Thus, given MY specific examples showing a quantifiable path to maturity for this application of AI, my observation is that not a single specific example has been provided showing that AI is doomed to an "unquantifiable path of maturity" that prevents it from being a viable option for the previously referenced "problem". In other words, where is the research that says "we've conducted research on deploying AI in the cockpit, and have concluded it isn't going to work, full stop"? Please share.


I suspect our best course forward is to agree to disagree, and virtually shake hands. Assuming we can do this, and stop the circular parsing, we can move forward toward other pressing issues and accept that this thread is worn down. Of course, new ideas, creativity, and positive contributions can always breathe new life into a worn thread. Only time will tell where all this goes.
 
Aaand, I'm done.

Although technology scored a resounding victory, the controlled conditions of the F-16 simulation don't mean that the program could have beaten a human in real combat. Col. Daniel "Animal" Javorsek, who oversees the A.I. piloting program at DARPA, said the results come with "plenty of caveats and disclaimers."
 
Signing off here.
Good choice.

One thing I always observed in my 50+ years in the Aircraft Design, Operation, and Maintenance business was "Just because something can be done doesn't mean it should be done," be it AI in the cockpit or jumping off a bridge with an umbrella for a parachute.

Cheers
 
Following up on …
TCABM. #13. October 22, Alaska Airlines Flight 2059 operated by Horizon Air from Everett, WA (PAE) to San Francisco, CA (SFO) reported a credible security threat related to an authorized occupant in the flight deck jump seat...
I have no idea of the context of what you have written. Allow me to repeat the question, although I'm not sure I can make it much simpler.

What is the publishing date of the book you’re writing?
 
"A computer can win a dogfighting game so, therefore, a computer is the best solution to detect when a pilot has developed bad intentions, wrest control of an airliner from him, and fly the airliner to a safe outcome," is an extraordinary logical leap, even for POA.
 
Creativity….

What if we exterminate psychedelic mushrooms?

And the third person in the cockpit doesn’t have to be a pilot…

I don't know if AI will solve the perception problem either… But if it would, I could see AI-based adaptive environmental control systems being put in the cockpit, and the public then being told that airliners now have AI and suicidal pilots can't crash a plane.

Kinda like all the special training we DIDN'T get after 9/11. Telling the public that we got it seemed to really help, when in fact we didn't.
 
Um, yeah... bringing up those de facto F-16 DCS trials is not a good point at all. Oof, where to begin. I'll let my buddy Mover do the big items (skip to 12:31 for the debrief items).



So those caveats the fair general intimates in the DARPA article very much include the fact that HAL had the total energy telemetry of the human pilot, just like any video game. Basically the human was shadowboxing himself. I won't belabor the point, and to remain UNCLASS: this simply wouldn't be the case in real life. HAL wouldn't have the sensor information for it.

The other assumptions that negate the reality of real combat and real flight, Mover does a great job parsing through. The point about simulator pilots versus real pilots is also great. There are a couple of fanboi DCS youboober channels that like to invite fighter pilots to smoke them in DCS, and as has been highlighted in the past, the cues and physiological elements that go into BFM in real life are completely moot in the sterilized academic setting of a DCS console. I'd love for the USAF to let us bring one of these inFluEncErs onto a BFM ride and put their money where their mouth is. It'd be a short sortie of course, having to KIO for physiological on the part of our orientation rider.... :cornut: :rofl:

I don't mean this rebuttal to shut down the discussion on AI, but sometimes staying in your proverbial lane can help you become more persuasive. This one was a Pk miss by a mile.
 
:rofl: Oh man, that’s hysterical!

You're assuming a fact not in evidence: that AI is a viable solution. AI can't get basic math correct (see the exchange between @FastEddieB and me from a few months back), AI has been seen presenting false and fabricated information as fact, etc.

AI “viable?” Oh, that’s rich.
FWIW, AI can absolutely get basic math correct. But the LLM approach taken for ChatGPT is specialized for creating text, not doing math. If you want a math-bot, you would use a different algorithm. Saying that AI can't do basic math because ChatGPT can't do basic math is like saying that it's impossible to build roads because you can't put enough cement into a Subaru to move it to the work site. Horses for courses...
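
To make the "different algorithm" point concrete, here's a minimal, purely illustrative sketch (nothing from this thread, and the function name is made up): hand exact arithmetic to a symbolic math library like SymPy and let the language model stick to generating text.

```python
# Minimal sketch (illustrative only): delegate exact math to a math tool
# instead of asking a text-generation model to guess at digits.
from sympy import sympify

def evaluate_expression(expression: str) -> str:
    """Parse and evaluate an arithmetic expression exactly with SymPy."""
    try:
        return str(sympify(expression))   # exact symbolic evaluation
    except Exception:
        return "could not parse expression"

print(evaluate_expression("12345 * 6789"))   # -> 83810205
print(evaluate_expression("sqrt(2)**2"))     # -> 2
```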
 
FWIW, AI can absolutely get basic math correct. But the LLM approach taken for ChatGPT is specialized for creating text, not doing math. If you want a math-bot, you would use a different algorithm. Saying that AI can't do basic math because ChatGPT can't do basic math is like saying that it's impossible to build roads because you can't put enough cement into a Subaru to move it to the work site. Horses for courses...


Please read my subsequent post regarding UC Berkeley's work on training AI to do math. AI only achieved 5% accuracy.
 
Please read my subsequent post regarding UC Berkeley's work on training AI to do math. AI only achieved 5% accuracy.
Read it now. Thanks.

What I could Google didn't tell me what models they were using, but I suppose I don't have any good reason to believe that they somehow were smart enough to get funding for a bunch of GPU time, but dumb enough to choose the worst model for doing math. There had been similar discussions around ChatGPT, so I naively assumed that was the context you were talking about.

I guess this just leads me back to something I've suspected for a long time: this stuff isn't going to be solved with one tool. AIs, with the hardware we have today, are already very good at pattern matching (e.g. https://www.wired.com/story/ai-hurricane-predictions-are-storming-the-world-of-weather-forecasting/). Probably some smart kid is doing their PhD right now on blending AI with MATLAB and making something new and differently capable as a result, rather than trying to figure out how to make AI do math on its own.
 
...this stuff isn't going to be solved with one tool.
DING DING DING! Winner!

Agree 100%.


AIs, with the hardware we have today, are already very good at pattern matching (e.g. https://www.wired.com/story/ai-hurricane-predictions-are-storming-the-world-of-weather-forecasting/).

Well, yes and no. Read up a bit on AI "brittleness." One of the challenges is that you're never sure just what AI is latching onto for recognition, and very slight (even trivial) changes to an image, for example, can result in wild errors.
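
If you want to see what that brittleness looks like, the classic demonstration is a "fast gradient sign" style perturbation: nudge every pixel a tiny amount in the direction that most increases the classifier's loss, and a confident, correct prediction can flip. A rough, purely illustrative sketch follows -- `model`, `image`, and `true_label` are placeholders for a trained PyTorch classifier and one of its labeled inputs, not anything referenced in this thread.

```python
# Illustrative FGSM-style sketch: tiny pixel changes, large prediction errors.
import torch
import torch.nn.functional as F

def fgsm_nudge(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` perturbed slightly to increase the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step each pixel a tiny amount in the direction that hurts the model most.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbation is often invisible to a human, which is exactly why it's hard to know what the network is actually latching onto.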

Plus, when faced with something new, AI doesn't necessarily make good decisions. For example:

I can just picture the Tesla owner, calling "Come, Tesla! Here boy, here boy! Come!" And here comes the happy car, wagging its tail, running to its master, crunching right into the airplane that's in its path.

Probably some smart kid is doing their PhD right now on blending AI with MATLAB and making something new and differently capable as a result, rather than trying to figure out how to make AI do math on its own.

Yep, I'm sure. There's already been some work trying to interface with Wolfram Alpha.

This stuff is coming, but it's not ready for prime time today. We are years and lots of development away from being able to integrate it into a cockpit safely and allow it to override a human pilot.
 
Well, yes and no. Read up a bit on AI "brittleness." One of the challenges is that you're never sure just what AI is latching onto for recognition, and very slight (even trivial) changes to an image, for example, can result in wild errors.

Plus, when faced with something new, AI doesn't necessarily make good decisions.

Maybe I'll look dumb in 50 years for ever having had this opinion, but that stuff doesn't worry me much. Humans aren't exactly great with new information either, or effective at changing their minds when presented with new inputs. I also think there is a camp of people that demands perfection from AI, while I'm firmly in the "perfect is the enemy of the good enough" camp. If an AI is just a little bit better than humans at a task, that's still better and should be used for that task. And different scenarios have different rules. You know the old saying about designing bear boxes, "the problem is that there is significant overlap between the smartest bears and the dumbest humans". There are probably a lot of situations where AI is useful to prevent the dumbest humans from doing dumb things, and other situations where the smartest humans are the only ones that can do a particular task for the foreseeable future.

I saw your other thread with my buddy @rwellner98 about project management and it seems like the three of us end up mostly on the same page on most stuff and just enjoy arguing about the details we don't.
 
One of the problems that has been mentioned about AI in the realm of self-driving cars is that the amount of testing required to prove that they really are safer is massive.
 
I'd love for the USAF to let us bring one of these inFluEncErs onto a BFM ride and put their money where their mouth is. It'd be a short sortie of course, having to KIO for physiological on the part of our orientation rider.... :cornut: :rofl:
:rofl:
 
Yee-haw. This sure is a spicy thread, and with late-breaking news that has direct bearing on the topic, one more post seems appropriate "for the good times."

NEWS Flash-> FAA Launches Rulemaking Committee On Pilots’ Mental Health. - "With concerns raised over whether pilots fear seeking treatment for mental health issues, the FAA announced today (Nov. 9) it will establish a Pilot Mental Health Aviation Rulemaking Committee (ARC) to address the issue.”

Difficult to predict where this new rulemaking committee will go with new rules, but I'm willing to place a wager that once the FAA's rulemaking committee on pilots' mental health is done doing its thing, the ATP community will be begging for AI in the cockpit if it means they can waiver out of the new rules and procedures for verifying mental health as part of medical screening. Just an observation and comment. BOHICA folks! Said my piece. Made my case. Signing off - again.

BTW - not sure what post #380 is referring to, but it does sound a bit "off-putting", maybe even trolling - rules of conduct?
In the meantime, I'm heading over to the thread on "Best Puke Bag" in Flight Following - what a kick - I'm thinking the 55-gallon bag is the top solution thus far! LOL!
 
There is a grain of truth in the idea that the FAA is capable of over-reacting to things.
 
A bit of a tangent, but... They did a movie about the Sioux City crash, where the crew was able to save many of the passengers after a complete loss of hydraulics. They did a movie about Sully, of course. Netflix even did a movie about the MCAS problem. I think it would be worth a short movie, though, to cover one or more of the times that MCAS tried to kill an airplane and the pilots corrected it. Do a bit on the pilots and their background, hand flying little planes like they all did, doing instruction or whatever, then fast forward to the spinning dials, override, correction, everybody goes home and has cheeseburgers. Maybe that would be too boring?
 
Do a bit on the pilots and their background, hand flying little planes like they all did, doing instruction or whatever, then fast forward to the spinning dials, override, correction, everybody goes home and has cheeseburgers. Maybe that would be too boring?


Yes, too boring. BBQ’d spare ribs would be much better.
How about the backstory is that they are a bbq judge on the weekends and then they go back to judging and eating bbq?

There could be a segment on staying up all night to keep the temperature right on some bbq. Their daughter comes out to kiss them goodnight. She talks about her fear that daddy might die in a plane crash. He talks to the grill about hanging it up but then saves everyone because he keeps the job.
 
I think it would be worth a short movie, though, to cover one or more of the times that MCAS tried to kill an airplane and the pilots corrected it. Do a bit on the pilots and their background, hand flying little planes like they all did, doing instruction or whatever, then fast forward to the spinning dials, override, correction, everybody goes home and has cheeseburgers....
Reminds me of something my initial instructor always said:

"Make that airplane do what YOU want it to!"​
 
The sense I got from reading that was that the pilot regrets those "30 seconds" when he pulled the levers. I don't see anything about regretting being stupid enough to eat psychedelic mushrooms. What pilot does that? We have to answer questions every six months about drug use. It's not like he didn't know he shouldn't be doing that. That right there is enough for me to lack any kind of empathy for his situation, let alone the subsequent in-flight events.

Of course, the article could just not be quoting the parts about him regretting the mushrooms.
 
He ate mushrooms because he didn't feel he had access to other mental health care, unless you count alcohol.

It's almost as if restricting medical care leads people to seek care elsewhere, where it is less safe... mushrooms, or back alleys...

I'm not saying he chose wisely, but "zero empathy" seems a bit much. It is possible to disagree with someone's choice and empathize with them.
 

Good on him for accepting responsibility, and maybe his story will lead to more opportunities for pilots with mental health issues to get help and not sacrifice their careers. Sounds like it already has.
 
He ate mushrooms because he didn't feel he had access to other mental health care, unless you count alcohol.

It's almost as if restricting medical care leads people to seek care elsewhere, where it is less safe... mushrooms, or back alleys...

I'm not saying he chose wisely, but "zero empathy" seems a bit much. It is possible to disagree with someone's choice and empathize with them.
How does the logic work that leads to the conclusion that use of illegal psychoactive drugs is a safer alternative than seeking access to real mental-health care, or even talking to a priest? You can't even argue that it's a "safer" FAA alternative, because legally you have to disclose it on your medical application, just like therapy, and unlike therapy for something minor (if his illness was), using shrooms is going to automatically ground you. So if you're willing to use mushrooms and lie about it, why not just talk to a therapist and lie about it?
 
How does the logic work that leads to the conclusion that use of illegal psychoactive drugs is a safer alternative than seeking access to real mental-health care, or even talking to a priest? You can't even argue that it's a "safer" FAA alternative, because legally you have to disclose it on your medical application, just like therapy, and unlike therapy for something minor (if his illness was), using shrooms is going to automatically ground you. So if you're willing to use mushrooms and lie about it, why not just talk to a therapist and lie about it?
It goes "if I see a real medical professional there is a 'real' medical report out there and I could get busted."
<looks around the room> I don't see anyone saying his logic is sound, but the evidence is overwhelming; when you restrict access to care, people will seek it in less safe places/ways.

Without doubt, the FAA's stance on mental health has the effect of restricting pilots' willingness to seek professional care.
 
To Dr Brent Blue in the above video, I’d say I’d want neither one flying me around.

 
I have a hard time believing he was still having hallucinations 48 hours after he ingested them. I can't say it's impossible, but I believe it is improbable.

 