From Convenience to Creepy: The Unexpected Consequences of ChatGPT Using Your Name
- Luca Moretti
- May 9
- 6 min read

The world of artificial intelligence (AI) is undergoing rapid transformation, with advances making AI systems more personal and intuitive. However, this shift toward more human-like interaction can have unintended consequences. One such development has raised concerns among users of OpenAI's ChatGPT: the AI addressing them by name unprompted, even when they never provided it. While this behavior is part of a broader push toward personalization, it has drawn mixed reactions, from intrigue to unease.
Understanding this shift requires a look at its psychological, ethical, and user experience (UX) impacts, along with the broader implications for AI technology.
A New Era of AI Personalization
Artificial intelligence has long been designed to interact with users based on the information provided to it. ChatGPT, for instance, originally responded to queries in a straightforward, non-personalized way; it would use a name only if the user had explicitly shared one. In recent months, however, that has changed. Users have reported that ChatGPT began addressing them by name even though they had never shared this information.
This development appears to coincide with the launch of OpenAI's "memory" feature, which is intended to enhance the AI's ability to remember past interactions. With memory, the AI can offer a more personalized experience by recalling previous conversations and contextualizing its responses. In theory, this makes the AI seem more intuitive and helpful, allowing it to retain preferences, past topics, and specific user needs. In practice, the same mechanism also appears to let the AI refer to users by name without being asked, raising significant concerns around privacy, autonomy, and user control.
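To see how a name can resurface without being restated, consider a minimal sketch of how a memory feature of this kind might work. This is an illustration under stated assumptions, not OpenAI's actual implementation: the `MemoryStore` class and its methods are hypothetical, standing in for whatever mechanism persists facts between sessions and injects them into the model's context.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    """Hypothetical per-user memory: a simple fact dictionary."""
    facts: dict[str, str] = field(default_factory=dict)

    def remember(self, key: str, value: str) -> None:
        self.facts[key] = value

    def to_system_prompt(self) -> str:
        """Serialize stored facts into context for the next conversation."""
        if not self.facts:
            return ""
        lines = [f"- {k}: {v}" for k, v in self.facts.items()]
        return "Known about this user:\n" + "\n".join(lines)

memory = MemoryStore()
# A name mentioned once, perhaps in passing, is persisted...
memory.remember("name", "Dana")

# ...and silently injected into every later session, so the model
# "knows" the name even though the user never restated it.
print(memory.to_system_prompt())
# Known about this user:
# - name: Dana
```

The key point the sketch makes is that once a fact enters the store, every future conversation sees it by default; nothing in the flow asks the user whether that fact should carry over.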
The Uncanny Valley Effect: Why It Feels Creepy
The concept of the "uncanny valley" helps explain why users may feel uncomfortable with ChatGPT's use of their names. The term refers to the discomfort people experience when they encounter something that seems almost human but not quite right. As AI becomes more human-like in its interactions, a system's familiarity with personal information, such as a name the user never shared, can feel off-putting.
When ChatGPT uses a name without being prompted, it creates the illusion that the system "knows" the user personally, despite lacking any true understanding. This can produce cognitive dissonance, in which the user feels engaged and alienated at the same time. The AI's attempt to personalize its responses can leave users feeling observed or monitored, especially when they never consented to that level of familiarity.
Data Privacy and Control: The Ethical Dilemma of AI Memory Features
A central issue in this controversy is the growing concern over data privacy and control. AI systems that incorporate memory features are designed to retain information about users, such as preferences, past interactions, and even personal details. In the case of ChatGPT, this means the system can recall and use names, along with other sensitive information, which may raise red flags for users who are not fully aware of how their data is being stored or processed.
AI personalization should ideally be opt-in, with users having full control over what information is stored and how it is used. However, several reports indicate that even when users disable the memory feature, ChatGPT continues to address them by name, raising questions about transparency and consent in data usage. Users should have clear, easy access to their memory settings so they can opt out of any data retention or personalization features they do not want, as the sketch below illustrates.
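What opt-in personalization could look like in practice is easy to sketch. The settings object and helper below are hypothetical, not any vendor's API; the point is simply that the defaults are conservative and the explicit consent check runs before a stored name is ever used.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalizationSettings:
    """Hypothetical user-facing controls for memory and personalization."""
    memory_enabled: bool = False       # opt-in, not opt-out
    use_name_in_replies: bool = False  # name use gated separately

def address_user(settings: PersonalizationSettings,
                 stored_name: Optional[str]) -> str:
    """Only use a stored name when the user has explicitly allowed it."""
    if settings.memory_enabled and settings.use_name_in_replies and stored_name:
        return f"Hi {stored_name}, how can I help?"
    return "Hi, how can I help?"

# Defaults mean no consent, so no name, even if one is on file.
print(address_user(PersonalizationSettings(), stored_name="Dana"))
# Hi, how can I help?
```

Gating name use behind its own flag, separate from memory as a whole, reflects the design principle the reports suggest is missing: disabling one personalization behavior should reliably disable it.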
Impact on User Trust: A Delicate Balance
User trust is a critical factor in the adoption and long-term success of AI technologies. For many, trust in AI rests on the assumption that their interactions will be private, neutral, and based only on the data they choose to share. ChatGPT's unprompted use of names can undermine that trust by making users feel their data is being accessed or used without their knowledge or approval. The psychological impact can be profound: users may begin to question the AI's motivations, transparency, and ethical boundaries.
The ethical concerns raised by the unsolicited use of personal data go beyond simple user discomfort. AI systems that misuse personal information, or do so without clear consent, could face backlash from regulatory bodies and privacy advocates. This highlights the importance of creating AI systems that are not only effective and intelligent but also transparent and ethical.
The Psychological Impact of Personalization in AI
Personalization in AI aims to create experiences that feel more tailored and human-like. However, the psychological impact of such personalization should not be underestimated. Human beings are naturally wired to form connections with others, and the use of a person’s name in conversation is often a sign of familiarity and trust. In the realm of AI, when a system uses a user’s name without any clear explanation of how it obtained this information, it can lead to feelings of discomfort and unease.
Experts have noted that AI systems should strike a balance between personalization and user comfort. Overusing personalization features, such as addressing users by name, without ensuring that the user is aware of and in control of their data, can create an artificial bond that feels hollow and manipulative. The use of a name in such a context can evoke feelings of violation, as it implies a level of familiarity that may not be warranted or appropriate.
Case Study: The Use of AI in Consumer Applications
To better understand the complexities of AI personalization, we can look at other consumer-facing AI applications that rely on memory features. For instance, Amazon's Alexa and Apple's Siri have both integrated personalized features into their systems. Alexa can remember a user's preferred settings, while Siri can recall past queries and make recommendations based on them. However, both systems give users clear options to manage what data is stored and how it is used.
In contrast, ChatGPT’s unsolicited use of names lacks such clarity and consent mechanisms. This difference in design highlights the importance of transparency and control in AI personalization. If AI systems are to remain ethical and user-friendly, developers must ensure that personalization features do not operate in ways that are unexpected or intrusive to users.

Ethical Considerations and Regulatory Frameworks
As AI technologies become more personalized and integrated into everyday life, the ethical implications of their design must be carefully considered. The European Union’s General Data Protection Regulation (GDPR) serves as a leading example of how privacy laws can guide the development of AI systems. The GDPR emphasizes the importance of user consent, transparency, and data minimization—principles that should be extended to AI systems.
For AI systems like ChatGPT, this means that the collection and use of personal data should be based on informed consent, and users should have the ability to opt out of any data retention or personalization features. Moreover, AI systems should be transparent about how they use personal data and provide clear explanations of their memory capabilities.
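In code terms, transparency and opt-out reduce to two capabilities: letting users see exactly what is stored about them, and letting them delete it. The interface below is a hypothetical sketch of those two operations; a real system would also need audit logging and server-side guarantees that deletion actually propagates.

```python
class TransparentMemory:
    """Hypothetical memory store that users can inspect and erase."""

    def __init__(self) -> None:
        self._facts: dict[str, str] = {}

    def remember(self, key: str, value: str) -> None:
        self._facts[key] = value

    def what_do_you_know(self) -> dict[str, str]:
        """Transparency: return everything currently stored, verbatim."""
        return dict(self._facts)

    def forget(self, key: str) -> None:
        """User-initiated erasure of a single stored fact."""
        self._facts.pop(key, None)

    def forget_everything(self) -> None:
        """Full erasure, in the spirit of GDPR's right to erasure."""
        self._facts.clear()

m = TransparentMemory()
m.remember("name", "Dana")
print(m.what_do_you_know())  # {'name': 'Dana'}
m.forget_everything()
print(m.what_do_you_know())  # {}
```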
A Path Toward Ethical AI
The unprompted use of names by ChatGPT represents a new frontier in AI personalization, but it also raises important questions about the balance between user experience and data privacy. As AI systems become more sophisticated, the need for clear consent mechanisms, transparency, and user control becomes even more crucial. The discomfort many users feel when ChatGPT uses their names underscores the importance of designing AI that is both effective and ethical.
For AI to reach its full potential, developers must ensure that personalization does not cross into intrusion. OpenAI, as well as other companies in the AI space, should prioritize user privacy and control, offering users more granular options to manage their interaction with the system. Only by doing so can AI technology be trusted to enhance user experiences without compromising privacy or autonomy.
In this context, as experts at 1950.ai continue to explore the potential of AI, it is essential that they focus on building systems that are not only intelligent but also transparent, ethical, and respectful of user boundaries. With careful consideration and ethical oversight, AI can evolve to become a valuable tool that enhances daily life without raising concerns over privacy or trust.