With regard to the assertions that AI math is incorrect: yes, it is possible to pay admission for the cheap seats and get a cheap AI system. The marketplace can be brutal. As for the personal innuendos and slights, they are sophomoric at best, contribute nothing to the quality of the discussion, and simply suggest that without a further infusion of creative ideas, this thread is worn out. AI guardrail systems in airline passenger transport are inevitable. It is only a matter of time.
Let me suggest that, rather than relying on websites like Defensescoop for your information on AI, you spend some time reading the publications of actual professional engineering societies. The IEEE has published many articles on AI; for starters, you might take a look at the October 2021 issue of IEEE Spectrum. There are several AI articles in that issue, and I recommend starting with "The Turbulent Past and Uncertain Future of AI" and "7 Revealing Ways AIs Fail."
A couple of major challenges with AI are "brittleness" and "catastrophic forgetting," either of which could easily crash an airplane. Brittleness is the inability of an AI system to respond appropriately to slight changes in patterns it has previously learned. AIs struggle with "mental rotation" of images, something that is trivial for human intelligence: show me a picture of an object and tell me it's a beer glass, then show me a picture of it lying on its side, and I can still tell you it's a beer glass. AIs often can't. Catastrophic forgetting is the tendency of an AI to abruptly and entirely forget information it previously knew once it learns something new; these systems have a terrible memory.
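To make catastrophic forgetting concrete, here's a toy sketch of my own (not from the article, and deliberately simplified): a tiny classifier is trained on one task and then on a second, with no rehearsal of the first. Nothing in ordinary training preserves the old knowledge, and accuracy on the first task collapses to near coin-flipping (exact numbers vary with the random seed):

```python
# Toy illustration of catastrophic forgetting, using only numpy.
# A tiny logistic-regression "network" is trained on Task A, then on
# Task B with no rehearsal of Task A. Task B's data carries no signal
# along the direction Task A needed, so ordinary training (here with a
# touch of weight decay, standard practice) simply overwrites it.
import numpy as np

rng = np.random.default_rng(0)

def make_task(mean_pos, mean_neg, n=500):
    """Two Gaussian blobs: label 1 around mean_pos, label 0 around mean_neg."""
    X = np.vstack([rng.normal(mean_pos, 1.0, (n, 2)),
                   rng.normal(mean_neg, 1.0, (n, 2))])
    y = np.hstack([np.ones(n), np.zeros(n)])
    return X, y

def train(w, b, X, y, epochs=3000, lr=0.1, decay=0.01):
    """Full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # sigmoid
        w -= lr * (X.T @ (p - y) / len(y) + decay * w)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == (y == 1))

# Task A separates blobs along one diagonal, Task B along the other.
XA, yA = make_task([+2, +2], [-2, -2])
XB, yB = make_task([+2, -2], [-2, +2])

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print(f"Task A accuracy after learning A: {accuracy(w, b, XA, yA):.2f}")  # ~1.0

w, b = train(w, b, XB, yB)   # sequential training, no Task A rehearsal
print(f"Task A accuracy after learning B: {accuracy(w, b, XA, yA):.2f}")  # near chance
```

Nothing exotic is happening there; it is exactly the behavior you'd expect when the only thing shaping the weights is the most recent batch of data.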
In discussing brittleness, the article cites a few examples:
"Fastening stickers on a stop sign can make an AI misread it. Changing a single pixel on an image can make an AI think a horse is a frog. ... Medical images can get modified in a way imperceptible to the human eye so that AI systems misdiagnose cancer 100 percent of the time."
Regarding math errors, this isn't just a problem in ChatGPT. It seems to be (at least so far) an inherent weakness of AI systems. UC Berkeley's Dan Hendrycks states, "AIs are surprisingly not good at mathematics at all. You might have the latest and greatest models that take hundreds of GPUs to train, and they're still just not as reliable as a pocket calculator." Berkeley researchers trained an AI on hundreds of thousands of math problems with step-by-step solutions, then tested it on 12,500 high school math problems. The result? Just 5% accuracy.
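For anyone wondering what a number like 5% accuracy means mechanically, the benchmark is simply problems with known exact answers, scored against the model's output. A toy sketch, with a hypothetical model_answer standing in for a real AI (a pocket calculator, by contrast, scores 100% on this by construction):

```python
# Toy illustration of how a math-benchmark accuracy figure is computed.
# `model_answer` is a hypothetical stand-in for an AI answering
# "a + b = ?"; real language models make exactly this kind of
# sporadic, confident slip.
import random

random.seed(0)

def model_answer(a, b):
    """Hypothetical model: usually right, occasionally off by a bit."""
    ans = a + b
    if random.random() < 0.3:                 # occasional confident mistake
        ans += random.choice([-10, -1, 1, 10])
    return ans

problems = [(random.randint(100, 999), random.randint(100, 999))
            for _ in range(1000)]
correct = sum(model_answer(a, b) == a + b for a, b in problems)
print(f"accuracy: {correct / len(problems):.1%}")   # ~70% for this stand-in
```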
Still think AI is viable? Still want it making life-critical decisions in a cockpit?
Look, people have been working on AI for decades; despite the current hype, this isn't a new field. There has been progress, and AI can perhaps assist with a few things and serve as one more tool in the box. But when it comes to protecting human life, it simply hasn't arrived yet, and if you read what the experts are actually writing in real engineering publications, instead of relying on pop-press hype, you'll see it has a long, long way to go.