
ChatGPT can now remember all of your previous conversations

But many users think it's not actually a good idea

OpenAI has begun rolling out a significant update to ChatGPT, enabling a persistent memory feature that allows the AI to recall details across a user's entire conversation history. Announced by CEO Sam Altman and initially available to Plus subscribers, the feature aims to make interactions more personalized and efficient.

However, early user reactions, particularly in discussions on Reddit and X, reveal a divide between users who appreciate the convenience the feature provides and those concerned about privacy, control, and performance.

The core promise of the new memory feature is “continuity”. Instead of starting fresh with each new chat, ChatGPT can now remember details such as:

  • your preferences

  • facts about your life or work

  • specific project details

  • interaction styles mentioned in previous sessions

For users leveraging ChatGPT for ongoing tasks like coding, writing, or business strategy development, the potential upside is clear: less time spent repeating context and a more tailored AI assistant. Some users who had access to earlier experimental versions shared positive experiences, noting the AI's ability to recall details like pet names, ongoing projects, or even recent personal events like a family member's injury.

Privacy Concerns

Despite these potential benefits, a wave of worry has swept through ChatGPT's user base. The most prominent backlash concerns privacy. The idea of OpenAI compiling a comprehensive, long-term profile from all of a user's interactions struck many as inherently "creepy" or a "privacy nightmare."

Users fear for data security, worry about potential misuse for advertising or other purposes, and find unsettling the prospect of a private company possessing such intimate knowledge. "If AI systems know me I do not want it to be accessible to some private entity," one user stated in a recent discussion on the r/singularity subreddit. While OpenAI emphasizes that users can opt out and manage memories, skepticism remains about true data control and deletion.

Is ChatGPT’s Enhanced Memory a Practical Feature?

Beyond privacy concerns, users raised practical questions about the feature's implementation and reliability. A major worry is "cross-contamination": the AI inappropriately mixing context from different conversations. Details from a casual, personal chat might bleed into a professional query, or vice versa, leading to irrelevant or meaningless outputs.

"It pulls in random information from other chats and adds more non determinism to your prompts," complained one user who reportedly experienced issues with earlier memory tests. Others worried about the feature creating an echo chamber, where the AI primarily validates past inputs rather than providing objective feedback or novel suggestions, particularly problematic for tasks requiring critical evaluation.

Early testers have expressed a desire not just for a simple on/off switch, but for the ability to enable memory selectively for specific projects or folders, explicitly exclude certain sensitive chats, or easily enter a "temporary chat" mode that doesn't contribute to the long-term memory bank.

The rollout process has also caused confusion. Many users believed a form of cross-chat memory already existed, likely due to staggered experimental releases and inconsistent experiences across regions and ChatGPT tiers. Furthermore, the feature is notably unavailable in the EU, UK, and several other European countries, presumably due to regulatory hurdles like GDPR.

While some users see the new memory feature as a significant step toward more capable, personalized AI, others found it disappointing relative to the recent hype, which has largely centered on Sam Altman's posts on X and his earlier remarks about losing sleep over an upcoming release.