> let me know when an AI bot gets an MBA...

They are already better than doctors at diagnosing diseases...
> They are already better than doctors at diagnosing diseases...

My wife's neurosurgeon is one of the best in the field. He told us that they are using AI to evaluate MRI and MRA images and assess the risk of strokes and rupturing aneurysms, and that the AI is doing a fantastic job of it.
> My wife's neurosurgeon is one of the best in the field. He told us that they are using AI to evaluate MRI and MRA images and assess the risk of strokes and rupturing aneurysms, and that the AI is doing a fantastic job of it.

Yep. Some of that work goes back to research done 20-25 years ago by the National Cancer Institute for early detection of breast cancer by looking at MRIs taken over a period of years. The NCI is part of NIH and is in the process of getting its budget cut dramatically.
> I use it daily. Treat it like a "junior-level employee" that you can delegate scutwork to and you won't be disappointed. Ask the junior to do senior work and... you'll get about the same result, as it enters a rabbit hole of its own making and you get to fish it out!

This is exactly what we realized a couple of years ago. For software development, you basically treat it like a junior employee and tell it exactly what you want. It's usually 90% correct, or better.
> Until the LLMs can map from the business speak to technical needs, mid or senior level engineers are going to be necessary. And even then, how far do you trust what a purely AI-generated application creates or does?

Yeah, people underestimate how much of a programmer's job (much less a designer's, architect's, or product owner's) is translating the business need into the application. AI is, currently, super effective at taking precise English instructions and converting them into Python/C++/SQL/JavaScript.
> I'm pretty sure I'll be retired by the time it reaches the level some people believe is imminent, where it can do it all. But I do believe it will eventually get to a point where a large percentage of the software industry will be an AI babysitter and troubleshooter outside of specialized niches.

Me too. I was telling someone earlier that I have a gray hair for every memory leak I've dealt with in C/C++ over the decades (I have a lot of gray hair). AI should be pretty effective at solving those problems. But that's going to be a programmer aid, rather than a programmer replacement, for a long time.
> let me know when an AI bot gets an MBA...

It already knows more than most MBA graduates...
> let me know when an AI bot gets an MBA...

You'll know. It'll go around randomly firing really good people because "they're too expensive".
> Yeah, people underestimate how much of a programmer's job (much less a designer's, architect's, or product owner's) is translating the business need into the application. AI is, currently, super effective at taking precise English instructions and converting them into Python/C++/SQL/JavaScript.

That's just the thing: that's what software is. Creating precise instructions.
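As a purely illustrative aside (this example is invented, not from the thread): an instruction as precise as "deduplicate a list of order IDs case-insensitively, keeping first-occurrence order" leaves a model almost nothing to guess at, and the Python it tends to produce is close to what a programmer would write by hand:

```python
def dedupe_order_ids(order_ids):
    """Return the IDs with duplicates removed (case-insensitive),
    preserving the order of first occurrence."""
    seen = set()
    result = []
    for oid in order_ids:
        key = oid.lower()        # compare case-insensitively
        if key not in seen:
            seen.add(key)
            result.append(oid)   # keep the original casing
    return result

print(dedupe_order_ids(["A100", "a100", "B200", "A100"]))  # ['A100', 'B200']
```

The vague version of the same request ("clean up these order IDs") is exactly where the translation work of a programmer still matters.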
> With current solutions, I can't even tell an AI something as simple as, "write an iPhone app that accepts the FAA data stream and displays current traffic on a zoomable VFR sectional".

Someone else needs to have done each of the pieces first for it to learn from... or you need to be much more detailed in your description.
> I sure hope that's just someone speaking a little out of context. ChatGPT is a great tool to act as an assistant to a skilled engineer. It's a terrible idea for someone who knows zero about programming to start slinging production code with ChatGPT.

So as I understand the story, an employee with zero programming experience was able to get GPT to write a piece of code to automate some menial task they were doing. I don't know enough about programming to understand exactly what they were doing, but he mentioned "300 lines of code". I presume it's taking data from one software package and plugging it into another. I believe one of our IT guys was watching over their shoulder. The story was that it took three iterations: the first version did something wrong, which they told the AI, and it fixed it; that happened again, and the third try worked correctly. We are strictly limiting the number of people who are empowered to use it at this time, but several thousand man-hours have been saved already. The nature of the grain elevator business is that there are LOTS of contracts and transactions that have to be reconciled, which is pretty low-hanging fruit for this kind of stuff. One of the topics of discussion at the meeting was implementing an official AI policy.
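For a sense of what a reconciliation script like the one described might look like in miniature, here is a hedged sketch; the file names, column names, and matching rule are all invented, since the thread doesn't show the actual code:

```python
import csv

# Hypothetical sketch of a contract/transaction reconciliation task:
# match rows from two exported CSV files on a shared contract number.
# File names and column names are invented for illustration.

def load_rows(path, key_column):
    """Read a CSV export and index its rows by the given key column."""
    with open(path, newline="") as f:
        return {row[key_column]: row for row in csv.DictReader(f)}

contracts = load_rows("contracts_export.csv", "contract_no")
payments = load_rows("payments_export.csv", "contract_no")

# Report contracts with no matching payment, and amount mismatches.
for contract_no, contract in contracts.items():
    payment = payments.get(contract_no)
    if payment is None:
        print(f"{contract_no}: no payment found")
    elif contract["amount"] != payment["amount"]:
        print(f"{contract_no}: amount mismatch "
              f"({contract['amount']} vs {payment['amount']})")
```

Real versions accumulate edge cases (partial payments, date windows, rounding), which is presumably where the 300 lines and the three iterations came from.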
> I then asked which of the three I picked were open on Sunday. It said all three were. I then asked for the websites, which it did give me successfully. By visiting their websites, I quickly determined all three are closed on Sunday, and found their menus quickly. So I just went to Google to finish my research.

GPT, please let me introduce Mr. Clemens...
> So as I understand the story, an employee with zero programming experience was able to get GPT to write a piece of code to automate some menial task they were doing. I don't know enough about programming to understand exactly what they were doing, but he mentioned "300 lines of code". I presume it's taking data from one software package and plugging it into another. I believe one of our IT guys was watching over their shoulder. The story was that it took three iterations: the first version did something wrong, which they told the AI, and it fixed it; that happened again, and the third try worked correctly. We are strictly limiting the number of people who are empowered to use it at this time, but several thousand man-hours have been saved already. The nature of the grain elevator business is that there are LOTS of contracts and transactions that have to be reconciled, which is pretty low-hanging fruit for this kind of stuff. One of the topics of discussion at the meeting was implementing an official AI policy.

It's excellent that a practical use case was discovered, has tangible output, and is making life easier.

But my takeaway from this is perhaps a little more harsh: primarily that the IT guys who supervised development should consider a new line of work, or else roll their sleeves up, get more involved with the workers, and learn to spot areas for workflow improvement.

I say that because any automation task that a non-techie can complete in only a handful of prompt iterations, and that has already produced a tangible ROI of thousands of man-hours saved, is an automation task those IT guys almost certainly should have done long ago.

> It's excellent that a practical use case was discovered, has tangible output, and is making life easier.

Most of us "IT guys" don't spend our time automating non-IT business tasks just because. If the business identifies a need and asks for it, then sure, we'll get involved, but that's typically a project.
> Most of us "IT guys" don't spend our time automating non-IT business tasks just because. If the business identifies a need and asks for it, then sure, we'll get involved, but that's typically a project.

In the firms I've worked at, and where I've managed, it's been much different. Your boss is going to ask where you're spending your time.
Lots of drudge work doesn't get automated because nobody bothers to ask for it. IT staff, whether programmers, infrastructure, or ops, have their own work to do. If jobs are going to be automated, those jobs will typically be infrastructure and ops/SRE first. If somebody's sitting at their desk playing spreadsheet copy/paste half the day, well, that's the job. If you want it automated and don't know how to do it yourself, then you can ask, but typically nobody is going to come around once a month and ask whether you have any odd jobs that need doing. We know better.
> In the firms I've worked at, and where I've managed, it's been much different. Your boss is going to ask where you're spending your time.

Right, but the key here is that the worker or the manager is going to engage IT. If nobody does that and the person is sitting there doing easily automated drudge work, the blame is not on IT for not knowing about it.
If you say "I'm moving data from A to B manually" or "I'm reformatting data A to format B manually", they're going to say "that's a really bad use of time". And within a few days you're going to have IT on the phone finding a way to automate it for you (that's assuming you didn't contact them under your own initiative already). At that point you just have to hope you have other things on your plate to keep you busy, or you're out of a job.
> Right, but the key here is that the worker or the manager is going to engage IT. If nobody does that and the person is sitting there doing easily automated drudge work, the blame is not on IT for not knowing about it.

That's totally fair. I guess if the culture doesn't really encourage it, you can't blame them for not just automatically knowing.
> At that point you just have to hope you have other things on your plate to keep you busy, or you're out of a job.

Do you happen to move jobs often? I've seen far more businesses with that attitude in management go out of business than keep their doors open. It might work to show a couple of quarters of increased earnings and get you a good performance bonus, but I think it's a terrible strategy for the business on a two-year projection. Yeah, you can trim some fat, but when your workforce all see it, your good workers will immediately start looking for work at different companies, and they'll find it.
> Do you happen to move jobs often? I've seen far more businesses with that attitude in management go out of business than keep their doors open. It might work to show a couple of quarters of increased earnings and get you a good performance bonus, but I think it's a terrible strategy for the business on a two-year projection. Yeah, you can trim some fat, but when your workforce all see it, your good workers will immediately start looking for work at different companies, and they'll find it.

I don't switch too often.
> I have zero knowledge of AI and can barely operate a keyboard; keep that in mind with my request.
>
> I have a 20-page form to complete (i.e., many questions on each page, some with checkboxes), and within the form are requirements to generate, de novo, other related documents, and I find the terms and verbiage mind-boggling. (I have a university degree, too.)
>
> I cannot find any of these (completed, or 'example') forms or documents on the internet.
>
> Is it likely that AI is going to be able to help? Where would I begin? Can I put the names of the form or additional documents on an AI website and hope to receive any help? Or provide an AI service with a PDF of the form?
>
> It's all aviation-related.

There's a good chance that AI can assist in explaining what is being requested. I'd recommend using ChatGPT.
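On the "provide an AI service with a PDF of the form" question: ChatGPT's web interface can accept PDF uploads directly, and failing that, the form's text can be extracted and pasted into the chat. A minimal sketch using the pypdf library (the file name is a placeholder):

```python
from pypdf import PdfReader  # pip install pypdf

# Extract the text of each page so it can be pasted into a chat with
# an AI assistant. "aviation_form.pdf" is a placeholder file name.
reader = PdfReader("aviation_form.pdf")
for number, page in enumerate(reader.pages, start=1):
    print(f"--- Page {number} ---")
    print(page.extract_text())
```

From there, a prompt like "explain what this form is asking for, question by question, in plain English" is a reasonable starting point.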
> It already knows more than most MBA graduates...

...and almost as much as a kindergartener.
> ...but yeah, you'd better already know what you're doing. I'm a scientist, and I've been working on a mineral called dolomite. I actually just did it for fun, so I wasn't trying to seriously do research. I asked ChatGPT to tell me about dolomite and right away spotted errors.

Absolutely. There are guys like my buddy who ask it a random question about quantum mechanics at dinner, then show us all a screen filled with long descriptions and mathematical formulas to "prove" that it has a PhD-level understanding of quantum physics. It doesn't matter that he has zero chance of ever validating that the answer is correct. Some people really are convinced by the mere presence of an intelligent-sounding answer. Dangerous!
> No matter how detailed my instructions were, it insisted on putting pterosaurs in the picture, even if I specifically instructed it not to include pterosaurs (none are known from the Navajo Sandstone). It would put some correct plants in, but with pterosaurs, so I would ask it to keep those plants and get rid of the pterosaurs, but every time it generated a new image, it would change the plants and add pterosaurs. The images it did generate were pretty nice, but unusable.

This is a problem with textual responses too, though it presents itself more obviously in images. It has trouble disassociating certain words from particular outputs.
> Prompt engineering is a science in itself. One has to know how to use AI. If you don't, you won't get good results. Sh** in, sh** out very much applies to AI.

It's true that good prompts are key to getting good results. But there are plenty of cases where no amount of prompt engineering is going to get you there, because the system simply isn't capable.
Yesterday I asked ChatGPT how Gene Hackman and his wife died. It said they were both very much alive. I replied saying no, they are both dead. It asked me to cite a source. I replied, "literally any source on the planet". It came back with the full overview of everything that was known up to that point about their deaths. I asked, "Can you clarify why you said they were alive when they are in fact dead?" It said it was using the info it had until I pushed it to take another look. Really weird.
> I've actually gotten replies much the same. It spits out something obviously wrong, and when it is pointed out with a reference, it replies, "thanks for the update"...

Yeah, this generation of the technology is not a replacement for Google when it comes to current events. These models take something like a week to a month to train; the actual number seems to be a corporate secret that hasn't been released. I also don't know how often the models are retrained, as that also seems to be a secret. Training is definitely massively expensive, so I wouldn't be surprised if retraining happens as infrequently as they can get away with.
> I had the same experience. About a year or so ago, I asked ChatGPT a couple of questions for which I already knew the answers. All of the answers are easy enough to find from publicly available sources and have been there since well before 2020. The answers it gave were wrong in every possible way. I pointed out the errors, got an apology, and a fresh set of different but equally incorrect answers, other than the one fact that I had corrected. I repeated that cycle a few times, never getting even close to the correct information.

One of the major issues that needs to be worked out with these models is that they don't seem to be able to simply say "I don't know". They are currently designed so that they have to give an answer, with the confidence of a politician or mid-level corporate manager, regardless of its accuracy.
> One of the major issues that needs to be worked out with these models is that they don't seem to be able to simply say "I don't know". They are currently designed so that they have to give an answer, with the confidence of a politician or mid-level corporate manager, regardless of its accuracy.

If I'm taking my mind off things, sometimes I'll go down a prompt rabbit hole and see how deep it goes before getting nonsensical results. I thought this one was amusing because it's one a human artist could easily do a good job with, but the AI didn't seem able to combine patterns across knowledge domains. So, instead of saying "I don't know", it came up with this drawing to represent "a tiger-striped, giraffe-necked, antlered hippo reacting to wheat flour allergy with sneezing, watery eyes, and a slightly swollen snout."
[attached image: the AI-generated drawing described above]