

Interesting. Is it interpreting the prompt as some sort of Caribbean patois and trying to respond back in kind? I’m not familiar enough to know if that sentence structure is indicative of that region.
If that’s the case, it makes sense that the answers would be lower quality because when patois is written, it’s almost never for quality informational content but “entertainment” reading.
Probably fixable with instructions, but one would have to know how to do that in the first place and that it needs to be done.
Interesting that this causes a problem, and yet it has very little trouble with my three wildly incorrect autocorrect disasters per sentence.
It’s not the clarity alone. Chatbots are completion engines, and they respond in a way that feels cohesive. It’s not that a question isn’t asked clearly; it’s that in the examples the chatbot is trained on, certain types of questions get certain types of answers.
It’s like if you ask ChatGPT “what is the meaning of life,” you’ll probably get back some philosophical answer, but if you ask it “what is the answer to life, the universe, and everything,” it’s more likely to say 42 (I should test that before posting, but I won’t).