I think predicting the next word is essentially the idea behind the Large Language Models that ChatGPT and its ilk build upon to produce human-like responses to queries.
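To make that concrete, here's a toy sketch of the next-word objective. This is a bigram frequency table, not a neural network — real LLMs learn the same kind of conditional prediction over subword tokens with billions of parameters — but the core idea of "given the words so far, guess the next one" is the same. The corpus and function names here are my own invention for illustration.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus.
# Real LLMs replace this lookup table with a deep neural network, but the
# training objective is the same in spirit: predict the next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`, or None."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often here, so: cat
```

Generating text is then just running this prediction in a loop, feeding each predicted word back in as context — which is, at heart, what ChatGPT is doing, only with vastly more context and a vastly better model.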
I think all my examples were A.I., albeit crude.
Not very long ago, I took a machine learning class in grad school, right before Hinton rocked the world. The technical parts were great, easily among the five most challenging, rigorous, and rewarding academic experiences I've ever had. That said, the non-technical parts were equally memorable and excellent, among them:
* The "What is AI?" conversation from the first lecture. Of course, many folks took the bait and threw out "human-like performance for a particular problem", to which the professor responded, "Why would you want to perform that badly?" Human pilots don't fly us to space; are Kalman filters AI?
* Capabilities are on a continuum and only get better. We used various games as an example. At the time, computers could play checkers, backgammon, and chess, but go was a pipe dream. (One kid wrote a poker bot; he's probably retired now.) Now, of course, computers play go at a superhuman level.
At the time, robot path planning was also pretty hard. If a person needs to pick up a rectangular object and hand it to someone through a small window, they know how to orient the object (and their arm!) to pass it through without hitting anything. Now we have Roombas that can detect objects, identify them as dog poop, and steer around them. I didn't see that one coming.
In undergrad, we joked that natural language processing is AI-complete, meaning you can only solve NLP if you already have an AI at your disposal. Now, machine translation and LLMs are common. Also in undergrad, protein structure prediction was basically intractable, with even the best methods barely better than a coin toss. Now, that's basically solved too, which has been coming in awfully handy lately.
I think this is one of those moments in history, like the steam engine, where the engineering jumped ahead of the science, and a lucky confluence of the right mathematical tools and the hardware to run them on now lets us brute-force train things that weren't possible a generation ago. Hang on and enjoy the disruptive/asymmetric technology ride.