Some ChatGPT users have recently noticed an odd behavior: the chatbot occasionally addresses them by name while working through their queries. This wasn't the default behavior previously, and several users report that ChatGPT brought up their names despite never having been told what to call them.
Reactions are mixed. Software developer and AI enthusiast Simon Willison called the feature “creepy and unnecessary,” while fellow developer Nick Dobos said he “hated it.” A quick search on X turns up numerous users confused and wary of ChatGPT’s new habit of addressing them by their first names.
One user likened the experience to a teacher who keeps calling out their name, saying it made them uncomfortable. Willison, for his part, asked whether anyone actually liked being addressed by name, or whether they found it as off-putting as he did.
It is unclear when the change happened, or whether it is tied to ChatGPT’s upgraded memory feature, which lets the chatbot draw on past conversations to personalize its responses. Some users report that ChatGPT addressed them by name even with memory and personalization settings switched off.
As of now, OpenAI has not responded to TechCrunch’s request for comment on the issue.
One user, Debasish Pattanayak, said that seeing their name in the chatbot’s responses felt unsettling, and questioned why the feature was necessary and what it implied.
The episode highlights the challenge OpenAI faces in personalizing ChatGPT without tipping into the uncanny valley, the discomfort that arises when an AI seems too human-like. OpenAI CEO Sam Altman recently spoke about AI systems becoming more personalized as they “get to know you over your life.” The latest user responses suggest that not everyone is sold on that vision.
An article by The Valens Clinic, a psychiatry practice in Dubai, may help explain the strong reactions: while using a person’s name signals intimacy, overusing it can come across as insincere.
Many users also seem to bristle at what feels like an attempt at personification, evoking the same unease one might feel if an appliance like a toaster suddenly started using their name. Anthropomorphizing the machine can undercut the experience rather than enhance it, as in one incident described by a reporter, where ChatGPT referred to a user named “Kyle” by name during a session before reverting to the impersonal term “user.”
The overall feedback so far skews toward skepticism and discomfort, underscoring how delicate a task it is to personalize interactions with an AI system without alienating the people it is meant to serve.