Actually using "AI"

Jim K
360 For Spacing · Management Council Member · PoA Supporter
Joined: Mar 31, 2019 · Messages: 6,462
Location: An undisclosed derelict engine shop
Display name: Richard Digits
Was talking to the CFO of our grain cooperative yesterday... they are actually using ChatGPT to write code to automate repetitive processes. Apparently these things are actually starting to become useful. I played with Grok for an hour or so this afternoon. It was fun, but not as powerful as I thought. It did a great job answering some simple questions, but it fell down on stuff I thought it would be able to do, like sleuthing out tail numbers I've flown after I gave it my POA handle, or finding images of my plane on the web (something Google is very good at). I also learned that non-members here are limited to very small thumbnails, so even when I told it which thread to look at, it still couldn't read the number from a clear close-up side shot.

Anybody find any actual uses for these things? Next time I plan a trip I'm going to see if it can assist with that. I'm also waiting for the next repetitive task I have to do. I'm kicking myself because I just spent an hour or so updating my depreciation spreadsheet for my taxes, something "AI" SHOULD be able to do easily.

Anybody tried all the various models to see which one works best? Grok was cool in that I could ask it about current events and it could find answers. It also seemed to be more willing to admit when it was guessing than GPT, although I haven't used the newest GPT models yet.

How about subscription costs?
 
Free ChatGPT has written several automation scripts for me, or at least created the foundation of them, with me tweaking as needed. I haven't found a reason to use it for aviation yet.
 
I’ve used both ChatGPT and Claude for trip itineraries and recommendations.

For example, we’re going to Banff later this year, so I asked both to give me a list of things to do, places to see, and where to eat, and they spit out good-enough itineraries that we’ll use 80%+ of the suggestions.
 
I asked Microsoft Copilot the following and got this answer: (Note the first attribution)

What is the best time of year to fly a small single engine airplane from Florida to Maine?

Flying a small single-engine airplane from Florida to Maine can be an incredible journey, but timing is key to ensure a smooth and safe trip. The best time of year for this flight is typically during the late spring (May to June) and early fall (September to October). During these periods, you can expect more stable weather conditions, fewer thunderstorms, and less turbulence compared to the summer months when thunderstorms are more frequent.

Additionally, flying during these times allows you to avoid the extreme cold and potential icing conditions that can occur in the winter months. It's also a good idea to plan your flight during daylight hours to take advantage of better visibility and to avoid flying in low-light conditions.

Remember to always check the weather forecasts and NOTAMs (Notices to Airmen) before your flight, and be prepared to adjust your plans if necessary. Safe travels and enjoy the scenic views along the way!

Sources: Coast to Coast in a small plane - Pilots of America; Best IFR cross country single engine - r/flying, Reddit
 
I asked Microsoft Copilot the following and got this answer: (Note the first attribution)
But I mostly use it to ask cooking questions for things like recipes or ingredient substitutions. It gives me answers without making me read an essay.
I also ask it inflammatory political questions because even if it disagrees with me it doesn't get mad at me and make me sleep on the couch.
 
Anybody find any actual uses for these things? Anybody tried all the various models to see which one works best? How about subscription costs?
I'd consider myself an AI power user. I use it daily. 80% GPT, 20% Anthropic/Claude. My usage is a pretty good split between using the browser version versus programmatic API calls.

If you foresee yourself using it more, I'd recommend the $20/month GPT subscription with OpenAI. That gives you a lot of functionality, it's super easy to use, and I think they did away with any practical usage limits; I haven't hit one in over a year.

In the last month I've used it:
- To help me write a web scraper (for generating market share info in the Golden Age of GA thread)
- To generate an audio transcript of a financial call
- Probably 10x to help find the name for something (it's really good at taking your written description of a thing and telling you the technical name you're reaching for), or to get an overview of a topic
- To build a really mean LinkedIn post classifier
- To get information about the best PDF extraction libraries, with example use cases for Python
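For the web-scraper item, none of the actual code was shared; as a rough illustration, here's a stdlib-only sketch of the link-extraction step. The `LinkExtractor` class and the hard-coded HTML snippet are mine, not from the poster's project:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag it sees."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) tuples
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# In a real scraper the HTML would come from an HTTP request;
# a hard-coded snippet keeps the example self-contained.
page = '<ul><li><a href="/thread/1">First</a></li><li><a href="/thread/2">Second</a></li></ul>'
parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/thread/1', '/thread/2']
```

The nice part of having the LLM write this kind of thing is that the result is short enough to eyeball and verify before running it on real data.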

My estimate is that AI systems will eventually be a massive boon for education, especially for basic K-12 topics. They're like an unpaid tutor that never gets tired of answering your questions and is content to come up with new questions/examples all day long. The biggest shortcoming right now is that models are terrible at assessing their own logic and accuracy, so if you don't know what you're doing or asking for, it can easily be a case of the blind leading the blind.

And despite what's in the news about AI being on the brink of replacing humanity, it's still incredibly stupid. It's great to use for simple tasks if you intend to review its work product. But in no case would I use it for anything (and I do mean anything) where I wasn't reviewing the results and the output is even mildly important.
 
Anybody find any actual uses for these things?
I've been using Llama-3.2-3B-Instruct-Q8_0 for document summarization. It took a while to get the prompt dialed in and to figure out most of the quirks in the input data that needed cleaning up. After that, it works pretty well.
How about subscription costs?
What are those? :)
 
I don't use it so much in my personal life. But I use it at work. I'm not going to say publicly what I do, but when you're managing massive amounts of data in an enterprise environment it certainly has its use cases.
 
I use it daily. Treat it like a "junior-level employee" that you can delegate scutwork to and you won't be disappointed. Ask the junior to do senior work and... you'll get about the same result as it enters a rabbit hole of its own making and you get to fish it out :)
 
I jumped in with both feet on the AI scene a few weeks ago.

I used ChatGPT to generate Python code for a different (DIY) AI project. It was simple stuff like parsing text files and creating, moving, and renaming files, and I definitely could have remembered all the syntax to do it myself in an hour or so, but I pulled up ChatGPT, told it what I wanted it to do, and had the code tweaked and running the task I needed in about 15 minutes. It was so dang easy.
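For flavor, here's the shape of that kind of parse-and-rename chore in modern Python. The actual files and naming rule weren't shared; `prefix_with_first_word` and its rule are made-up stand-ins:

```python
from pathlib import Path
import tempfile

def prefix_with_first_word(folder: Path) -> list:
    """Rename every .txt file so its first word becomes a filename prefix."""
    renamed = []
    for txt in sorted(folder.glob("*.txt")):
        first_word = txt.read_text().split()[0]
        target = txt.with_name(f"{first_word}_{txt.name}")
        txt.rename(target)
        renamed.append(target)
    return renamed

# Exercise it on a throwaway directory so the example is self-contained.
with tempfile.TemporaryDirectory() as d:
    folder = Path(d)
    (folder / "log.txt").write_text("engine run complete")
    result = prefix_with_first_word(folder)
    print([p.name for p in result])  # ['engine_log.txt']
```

This is exactly the class of task where validating the generated code takes a minute and re-deriving the `pathlib` syntax from memory takes an hour.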

Currently, I'm working on training a custom YOLO object detection model that I can use in a Raspberry Pi-based bird feeder camera project.
 
AI for software development is kinda funny... Like AI for writing other things, it's a great way to get started, but you should still know what you're doing and plan on lots of editing to get it right.

However, management types think you can get a perfect program that works exactly how you want on your first prompt... No.

It requires tweaking, and you end up with a very specific set of instructions to get the exact behavior you want.

A very specific set of instructions to get the exact behavior you want.
A very specific set of instructions to get the exact behavior you want.
A very specific set of instructions to get the exact behavior you want.

You know, writing A very specific set of instructions to get the exact behavior you want used to be called "programming" or "development" or "coding". :rolleyes:

I do think it'll take some time off of the early stages of software development, but you'll still need experienced developers who understand the business needs to get a good finished product.
 
I do think it'll take some time off of the early stages of software development, but you'll still need experienced developers who understand the business needs to get a good finished product.
I'm thinking it will be that way for a little while longer, but within a few years we won't need programmers or writers or teachers or . . . "Some machine will be doing that for you".
 
I don't use it so much in my personal life. But I use it at work. I'm not going to say publicly what I do, but when you're managing massive amounts of data in an enterprise environment it certainly has its use cases.
Splunk ES? Just spitballin'.
 
It's called an algorithm, and it's more of an art than a science.

AI is a joke... it's basically Google on steroids.
What's Google's market cap? Not sure I could classify something that is that, but 'on steroids,' as a joke :dunno:
 
What kind?
I would argue that generative AI based on LLMs is non-deterministic. You can get different results from the same input.
It's just a piece of software like any other; make the PRNG seeds fixed and it's deterministic.
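To make that concrete, here's a toy sketch: pin the seed and the sampling loop becomes ordinary deterministic software. The `sample_tokens` function and its five-word vocabulary are illustrative, not any real model's sampler:

```python
import random

def sample_tokens(seed, n=5):
    """Toy stand-in for an LLM's sampler: draw n 'tokens' from a tiny vocabulary."""
    rng = random.Random(seed)  # fixed seed -> fixed stream of draws
    vocab = ["the", "plane", "flew", "north", "fast"]
    return [rng.choice(vocab) for _ in range(n)]

# Same seed, same "generation" -- the randomness users see comes entirely
# from varying the seed, not from anything non-deterministic in the math.
assert sample_tokens(42) == sample_tokens(42)
print(sample_tokens(42))
```

(In practice, GPU floating-point nondeterminism and batching can still perturb real model outputs even with a pinned seed, but the principle stands.)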

Of course, it's a totally different kettle of wax with *quantum* blockchain zero-trust serverless AI...
 
Is AI deterministic?
No.
The "temperature" parameter in GPT lets you dial the randomness up and down. Crank it to 0 if you want "deterministic," though it's still not truly deterministic.
Though the bigger annoyance is minor, minor changes in the prompt that lead to wrong answers.
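For the curious, this is roughly what the temperature knob does mathematically: the raw scores (logits) get divided by the temperature before the softmax, so low values sharpen the distribution toward the top token and high values flatten it. A sketch of the standard formula, not GPT's actual implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities, scaled by temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
# High temperature flattens the distribution (more random-feeling output)...
print(softmax_with_temperature(logits, 2.0))
# ...low temperature concentrates nearly all probability on the top token.
print(softmax_with_temperature(logits, 0.1))
```

At temperature 0 implementations just take the argmax directly, since the division above would blow up.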

I'm thinking it will be that way for a little while longer, but within a few years we won't need programmers or writers or teachers or . . . "Some machine will be doing that for you".
AI is so terribly bad at following instructions and keeping track of its own response that I can't envision this happening.
Eventually, maybe. But a few years? Not a chance, I don't think.
 
We use it quite a bit at work to develop automated vision inspections for products on the production machinery I design, but the controls guys handle that, I haven't worked with that software myself, I'm on the mechanical side. Mostly for inspections that are hard to quantify... it's easy to define simple geometric inspections without AI, but it's good for things like "this is a minor scratch that's acceptable, but that is an unacceptable defect". Show the AI program a thousand good parts and as many bad ones as you can find and tell it which is which, then turn it loose on the next batch of parts.

But I did use an AI image generator to create a logo for my Rose Parrakeet... all Parrakeets have names, and the builder of my plane registered it as a "Gypsy Rose," so, not being an artist, I asked the AI site for an image of a "parrakeet dressed as a gypsy holding a rose," over and over with minor edits to the text input. A lot of the images were absolute garbage (and never the same image twice), but I got one I liked (the lettering is not from the AI; it's adapted from the original 1930s logo):

[attached image: AI-generated Gypsy Rose logo]
 
I'm thinking it will be that way for a little while longer, but within a few years we won't need programmers or writers or teachers or . . . "Some machine will be doing that for you".
But when your toilet stops working at 7:00 am on thanksgiving when you’re supposed to be hosting your in-laws, plumbers will still have a job..
 
But when your toilet stops working at 7:00 am on thanksgiving when you’re supposed to be hosting your in-laws, plumbers will still have a job..
Naah, Elon's robot will fix it.

OK, Elon's robot will fix it by ripping the toilet right off its mount and carrying it into the kitchen during dinner, where the turds will be dropped on the food and the robot will notice the clog has cleared and reinstall the toilet. "Fixed." :rofl:
 
I've been working with AI for a little over 2 years now. It's been an incredible journey.
There are a lot of corporations that are shifting from increasing employee productivity to reducing staffing requirements, so it will affect some roles.
The biggest challenge is that, for junior employees who don't know any better, it can be misleading and impact their work more than it would someone with more experience who can call BS when the AI hallucinates.
 
I use it quite a bit for work. Lots of great applications.
Something ChatGPT isn't good at? Route suggestions. I asked it to plan out a simple 70 nm day-VFR trip, avoiding a Class C. It did manage to keep me out of the Class C, but the directions were unusable gibberish.
 
I use it daily. Treat it like a "junior-level employee" that you can delegate scutwork to and you won't be disappointed. Ask the junior to do senior work and... you'll get about the same result as it enters a rabbit hole of its own making and you get to fish it out :)
:yeahthat:
The biggest challenge is that, for junior employees who don't know any better, it can be misleading and impact their work more than it would someone with more experience who can call BS when the AI hallucinates.
:yeahthat:

Also, junior employees who try to pass off LLM output as their own and then vapor lock when you ask them to justify their professional opinion.
 
Was talking to the CFO of our grain cooperative yesterday...they are actually using chatgpt to write code to automate repetitive processes.
I sure hope that's just someone speaking a little out of context. ChatGPT is a great tool acting as an assistant to a skilled engineer. It's a terrible idea for someone who knows zero about programming to start slinging production code with ChatGPT.
 
Naah, Elon's robot will fix it.

OK, Elon's robot will fix it by ripping the toilet right off its mount and carrying it into the kitchen during dinner, where the turds will be dropped on the food and the robot will notice the clog has cleared and reinstall the toilet. "Fixed." :rofl:

About 2 minutes and 3 iterations with AI. We are living in a magical time, folks.

[attached image: AI-generated illustration of the robot-plumber scene]
 
I do think it'll take some time off of the early stages of software development, but you'll still need experienced developers who understand the business needs to get a good finished product.

Yep. I think this is the key: someone still needs to know what the heck is going on to be able to proofread it. It was really slick for what I was doing. It was very menial code, but I just didn't want to take the time to re-remember the syntax for it. Once it gave me the results, I could quickly and easily see what it was doing, and it was spot-on. Validate -> Copy -> Paste -> Execute -> Go read PoA with the spare time it saved. It could be VERY dangerous without Validate as the first step, though.
 
I’ve been working with OpenAI pretty much ever since it came out. It’s super useful. I’ve built a number of solutions for my own businesses but also for other businesses and municipalities on a consultation basis. I do think that a lot of people, including me, will lose their livelihoods to AI in the next 5 years if they don’t pivot to something that AI can’t do. Nobody is going to hire me as a consultant to help them implement AI into their workflow because AI will do it better, faster and cheaper. Same applies to developers, admin staff, supply chain, logistics, and more.
 
Best use yet is pointing it at your conspiratorially minded uncle and let it do that hard work of helping him see the light so you don't risk your own sanity.

"To do so, we developed a pipeline for conducting behavioral science research using real-time, personalized interactions between research subjects and AI. Across two experiments, 2190 Americans articulated—in their own words—a conspiracy theory in which they believe, along with the evidence they think supports this theory. They then engaged in a three-round conversation with the LLM GPT-4 Turbo, which we prompted to respond to this specific evidence while trying to reduce participants’ belief in the conspiracy theory (or, as a control condition, to converse with the AI about an unrelated topic).

RESULTS

The treatment reduced participants’ belief in their chosen conspiracy theory by 20% on average. This effect persisted undiminished for at least 2 months; was consistently observed across a wide range of conspiracy theories"
 
So, if AI can fill the role of junior/less experienced programmers, engineers, etc., but cannot fill the role of senior/more knowledgeable people... how are any senior/more knowledgeable people going to exist ten or twenty years after AI takes over the lower positions and takes away the opportunities to learn and become a more knowledgeable/experienced programmer or engineer? There are a lot of things you can't learn from school or a book that you learn on the job.

My job won't be threatened by AI any time soon, if ever, but if it is true that AI won't reach the ability to replace senior positions within the next ten or twenty years, it is extremely shortsighted to start replacing anyone with AI right now.
 
So, if AI can fill the role of junior/less experienced programmers, engineers, etc., but cannot fill the role of senior/more knowledgeable people... how are any senior/more knowledgeable people going to exist ten or twenty years after AI takes over the lower positions and takes away the opportunities to learn and become a more knowledgeable/experienced programmer or engineer? There are a lot of things you can't learn from school or a book that you learn on the job.

My job won't be threatened by AI any time soon, if ever, but if it is true that AI won't reach the ability to replace senior positions within the next ten or twenty years, it is extremely shortsighted to start replacing anyone with AI right now.

At the pace at which AI is developing, it absolutely will be able to take over senior/more knowledgeable people's roles 20 years from now - no doubt about it.
 
I use it a lot. ChatGPT is best for summarizing large documents and outlining the key points.

Grok 3.0 I think just passed ChatGPT for writing code. Collecting summary data out of large data sets is pretty good. Scraping available public info seems to be very good.

At work, we use it to search large data sets for fraud trends and identify high risk applications.

Yeah, not going to replace senior analysts for a while. But Jr analysts, soon...

You still need to know how to write good queries.

In specialized applications (robots, self driving) it's a lot more advanced than is generally acknowledged. Or so I think....
 