AI Rant

The only use case for spell check in my universe is to automatically flag typos, since I'm not perfect on a keyboard and am horrible at typing with thumbs on a phone screen.
Hahahahaha.
Well done.
 
I think predicting the next word is essentially the idea behind the Large Language Models that ChatGPT and its ilk build upon to produce human-like responses to queries.
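In case it helps, here's a toy sketch of the "predict the next word" idea: a bigram counter over a made-up corpus. (Purely illustrative; real LLMs are neural networks predicting tokens from enormous training sets, not frequency tables.)

```python
from collections import Counter, defaultdict

# Count how often each word follows another in a tiny made-up corpus.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once
```

Scale the counting up by many orders of magnitude, replace the table with a neural network, and you have the rough intuition behind "it's just predicting the next word."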

I think all my examples were examples of A.I., albeit crude ones.
Not very long ago, I took a machine learning class in grad school, right before Hinton rocked the world. The technical parts were great; it was easily among the five most challenging, rigorous, and rewarding academic experiences I've ever had. That said, the non-technical parts were equally memorable and excellent, among them:

* The "What is AI?" conversation from the first lecture. Of course, many folks took the bait and threw out "human-like performance for a particular problem", to which the professor responded, "Why would you want to perform that badly?" Human pilots don't fly us to space; are Kalman filters AI?

* Capabilities are on a continuum and only get better. We used various games as an example. At the time, computers could play checkers, backgammon, and chess, but go was a pipe dream. (One kid wrote a poker bot; he's probably retired by now.) Now, of course, computers play go at a superhuman level.

At the time, robot path planning was also pretty hard. If a person needed to pick up a rectangular object and hand it to someone through a small window, they'd know how to orient the object (and their arm!) to pass it through the window without hitting anything. Now we have Roombas that can detect objects, identify them as dog poop, and avoid them. I didn't see that one coming.

In undergrad, we joked that natural language processing is AI-complete, meaning you can only solve NLP if you already have an AI at your disposal. Now, machine translation and LLMs are common. Also in undergrad, protein folding was basically intractable, with even the best methods barely better than a coin toss. Now, that's basically solved too, which has been coming in awfully handy lately.

I think this is one of those moments in history, like the steam engine, where the engineering jumped ahead of the science, and a lucky confluence of the right mathematical tools and the hardware to run them on now permits us to brute force train stuff that wasn't possible a generation ago. Hang on and enjoy the disruptive/asymmetric technology ride.
 
It helps to define what we're talking about. AI is not just complex logic; it's an automated system that detects patterns in data, makes decisions based on those patterns, and makes better decisions with more data. Such systems are inherently probabilistic and unpredictable. Put differently, if you know exactly what you want the computer to do, don't use AI; write the code yourself. It will be simpler, far more efficient, and predictable.
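A tiny contrast, with made-up data, to illustrate that distinction: an explicit rule you wrote because you already knew what you wanted, versus a rule inferred from labeled examples, which changes whenever the data does.

```python
# Explicit logic: every step written by hand, output fully predictable.
def fever_rule(temp_c):
    return temp_c >= 38.0

# Data-driven version: learn the cutoff from labeled examples instead.
# With different (or noisy) data it learns a different, possibly wrong,
# rule. Readings below are hypothetical, for illustration only.
def learn_threshold(readings):
    """Pick the cutoff that best separates the labeled examples."""
    best_cut, best_correct = None, -1
    for cut in sorted(t for t, _ in readings):
        correct = sum((t >= cut) == sick for t, sick in readings)
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

data = [(36.5, False), (37.0, False), (37.2, False),
        (38.3, True), (39.0, True), (38.6, True)]
print(fever_rule(38.3), learn_threshold(data))
```

The learned version only happens to match the hand-written rule because this particular data supports it; feed it skewed examples and it will confidently learn something else.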

A Kalman filter is not AI. It is an error-correction algorithm. Every step in the logic is explicitly programmed. Given a set of inputs, it is repeatable and deterministic; it is not probabilistic in its behavior, and it doesn't learn or dynamically change its decision-making the way AI does.
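To make that concrete, here's a minimal scalar Kalman filter estimating a constant value from noisy measurements (a sketch only, not the filter from any real avionics system). Every line is explicit arithmetic, and the same inputs always produce the same output.

```python
# Minimal 1-D Kalman filter: estimate a constant from noisy readings.
def kalman_1d(measurements, meas_var=1.0, est=0.0, est_var=1000.0):
    for z in measurements:
        gain = est_var / (est_var + meas_var)  # how much to trust z
        est = est + gain * (z - est)           # correct the estimate
        est_var = (1 - gain) * est_var         # shrink the uncertainty
    return est

# Deterministic: rerunning with the same inputs gives the same answer.
print(kalman_1d([5.1, 4.9, 5.0, 5.2]))
```

Nothing is "learned" here in the machine-learning sense: the update rule is fixed by the programmer, and the filter's behavior is fully specified before it ever sees data.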

Decades ago people used to say that once it actually works, nobody calls it AI anymore. Over the past few years this has changed, with effective practical applications for AI. Yet the biggest problem is the hype. Since AI does some things better than humans, people incorrectly infer that it has human attributes like understanding or awareness.

Aviation-related content: the computer can fly the plane (or drive the car) better than a human, but only most of the time, in scenarios that closely resemble how the computer was coded and trained. As soon as it encounters an unusual scenario that doesn't resemble its training, the computer's performance suffers and it can do unpredictable, crazy stuff, making mistakes that no human would make. Of course humans make mistakes too, but they make different kinds of mistakes.

When it comes to everyday tools like phones, home appliances, etc., I find the "smart" features to be counterproductive. They make the devices overly complex and expensive, less reliable, and subject to planned obsolescence, all while providing features I neither need nor want. I have to throw the circuit breakers on my washer, dryer, and dishwasher every week or so because random functions stop working for no reason. Mechanically there's nothing wrong with the drums, motors, belts, etc. That hard reset reboots the controller and fixes the buggy logic for a while.
 
About two years ago the stock market went A.I. crazy. The financial surge in the tech sector, led by NVIDIA (NVDA), came because an algorithm successfully modified its own code and improved itself. That has brought ChatGPT and other "Generative A.I." forward for public use. The big bet at this point is that A.I. algorithms will soon be able to do a total rewrite of their own code without human intervention, basically writing themselves, and doing it at incredible speed.

The transportation industry has become one of the biggest pushers of generative A.I. that writes itself, re-creating itself every time the flight control operating system (on a BVLOS drone) has an unintended outcome. The latest flight operating systems make hundreds of modifications and corrections in a single A-to-B flight. Once the algorithm is able to write itself from scratch, it will do so a hundred times in that one A-to-B flight.

Generative A.I. writes code of a complexity beyond human capability. A drone operating system that completely rewrites itself from every experience, many times over every trip, is coming....
[attached image: HunterkillerDrone.jpg]
 
and don't worry - it'll be perfectly safe.

what could go wrong?
 