

Bio and memory are optional in ChatGPT though. Not so in others?
The age guessing aspect will be interesting, as that is likely to be non-optional.
Just a regular Joe.


The LLMs aren’t being assholes, though - they’re just spewing statistical likelihoods. While I do find the example disturbing (and I could imagine some deliberate bias in the training data), I suspect one could reproduce it with different examples with a little effort - there are many ways to make an LLM look stupid. It might also be tripping some safety mechanism somehow. More work to be done, and it’s useful to highlight these cases.
I bet if the example bio and question were both in Russian, we’d see a different response.
But as a general rule: Avoid giving LLMs irrelevant context.
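As a rough illustration of that rule, here is a minimal sketch of pre-filtering context before it reaches the model, using a naive word-overlap heuristic. This is entirely my own toy example, not how any real chatbot handles memory; the function name and the overlap threshold are assumptions for illustration only.

```python
# Toy sketch (assumption, not a real product's logic): drop context lines
# that share no vocabulary with the question, so the prompt carries only
# plausibly relevant material.
def filter_context(context_lines, question, min_overlap=1):
    """Keep context lines sharing at least `min_overlap` words with the question."""
    q_words = set(question.lower().split())
    return [
        line for line in context_lines
        if len(set(line.lower().split()) & q_words) >= min_overlap
    ]

context = [
    "User bio: grew up in Moscow, enjoys chess.",
    "Current task: debugging a Python script.",
]
question = "Why does my Python script crash?"

relevant = filter_context(context, question)
# The bio line shares no words with the question, so only the task line survives.
```

A real system would use embeddings rather than word overlap, but the principle is the same: the less irrelevant text the model sees, the less room there is for it to latch onto something spurious.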


I agree. What you get with chatbots is the ability to iterate on ideas & statements first without spreading undue confusion. If you can’t clearly explain an idea to a chatbot, you might not be ready to explain it to a person.


They rolled this update out mid-journey, and I had to scramble to swap seats with the mannequin driver. Not cool, Elon.
Not. Cool.
Indeed. Additional context will influence the response, and not always in predictable ways… which can be both interesting and frustrating.
The important thing is for users to have sufficient control, so they can counter (or explore) such weirdness themselves.
Education is key, and there’s no shortage of articles and guides for new users.