I’ve just played with ChatGPT a little bit. One frustration is that the answers often border on “maybe yes, maybe no.” Here are some recent queries:
“Is melatonin useful for insomnia?”
“In an aircraft engine, does lower rpm mean less wear?”
“Are squats good for SI joint pain?”
In each case, the answer bordered on “it depends.” Part of the issue, I suppose, is that many things in life are not black and white, and an AI needs to present both sides. But unlike Google, you have no idea of the provenance of the reply: which part comes from WebMD, say, and which from some alternative-medicine site.
Another such query was, “Does using the word ‘provenance’ in an online post come across as pretentious?”
The reply:
“It can be perceived as pretentious by some, depending on the context and audience. If the word is used in a casual setting or in a way that is not relevant to the topic, it could be seen as an attempt to sound sophisticated and could come across as pretentious to some readers. However, if the word is used in a professional or educational context where its definition and use are appropriate, it may not be perceived as pretentious.”
In other words, maybe yes, maybe no.
Anyway, ChatGPT itself has a take on the dilemma:
“Why do your answers often seem wishy washy?”
“As an AI language model, I aim to provide the most accurate and neutral information possible. When answering questions, I often include nuances and multiple perspectives to account for different interpretations or situations. This can sometimes come across as "wishy-washy," but it's meant to convey a balanced and comprehensive answer. However, I understand that sometimes a straightforward answer may be preferred, and I will try my best to provide that in the future.”
Well, at least it will try its best!