Meta AI Chatbot Vulnerability: Private Conversations Exposed
Discover how a critical bug in Meta AI's chatbot could have exposed private user conversations and the steps taken to mitigate the risk. Learn essential tips to safeguard your AI interactions.
TL;DR
A researcher uncovered a significant vulnerability in Meta AI’s chatbot that could expose private user conversations. The bug allowed unauthorized access to prompts and responses due to easily guessable identification numbers. Meta has since fixed the issue, but users are advised to take precautions when using AI tools to protect their privacy.
Meta AI Chatbot Vulnerability Exposes Private Conversations
A researcher recently disclosed a critical vulnerability in Meta AI’s chatbot that could have allowed anyone to view private user conversations. The bug, which assigned easily guessable unique identification numbers to edited prompts, enabled unauthorized access to private interactions [1].
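Predictable identifiers are a classic weakness: anyone who sees their own ID can guess a neighbor’s. As a hedged illustration (not Meta’s actual implementation), the sketch below contrasts a counter-based ID with a cryptographically random one generated by Python’s standard `secrets` module:

```python
import itertools
import secrets

# Guessable: a global counter hands out sequential IDs, so anyone who
# receives ID "1337" can simply try "1336", "1338", and so on.
_counter = itertools.count(start=1)

def guessable_prompt_id() -> str:
    return str(next(_counter))

# Unguessable: ~128 bits of OS-level randomness make enumeration
# infeasible, though server-side authorization checks are still required.
def random_prompt_id() -> str:
    return secrets.token_urlsafe(16)

print(guessable_prompt_id())  # "1"
print(random_prompt_id())     # e.g. "fK3hJ9rQx2WvT8mZpL4A1g"
```

Random identifiers alone are not a complete fix, as the next section makes clear; the server must also verify who is asking.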
Discovery and Impact
On June 13, 2025, it was reported that the Meta AI app publicly exposed user conversations, often without users’ knowledge. These conversations were accessible through the app’s Discover feed, raising privacy concerns. Although Meta did not initially acknowledge this behavior as a bug, further investigation revealed a more serious issue.
Security researcher Sandeep Hodkasia discovered that the Meta AI chatbot assigned a unique number to each query produced by editing a prompt. By analyzing the network traffic generated during prompt editing, Hodkasia found that he could simply change that identification number and retrieve other users’ prompts and AI-generated responses, because the servers did not verify that the requester was authorized to view them [1].
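The underlying pattern is what security engineers call an insecure direct object reference (IDOR): the server returns whatever record matches a client-supplied ID without checking ownership. The sketch below shows the kind of server-side authorization check that closes this hole, using a hypothetical Flask endpoint and an in-memory store (illustrative only, not Meta’s actual code):

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical in-memory store; a real service would use a database.
PROMPTS = {
    "1337": {"owner": "alice", "prompt": "...", "response": "..."},
}

def current_user() -> str:
    # Stand-in for real session-based authentication.
    return request.headers.get("X-User", "")

@app.route("/prompts/<prompt_id>")
def get_prompt(prompt_id: str):
    record = PROMPTS.get(prompt_id)
    # The crucial check the IDOR pattern omits: the requester must own
    # the record. Responding 404 for both "missing" and "not yours" also
    # avoids confirming to an enumerating attacker that an ID exists.
    if record is None or record["owner"] != current_user():
        abort(404)
    return jsonify({"prompt": record["prompt"], "response": record["response"]})
```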
Meta’s Response
Meta confirmed that the bug was fixed on January 24, 2025, after Hodkasia reported it on December 26, 2024. The company stated that it found no evidence of abuse and that the issue had been resolved. Still, the incident highlights the ongoing challenge of ensuring security and privacy in rapidly evolving AI technologies [2].
How to Safely Use AI
To protect your private information while using AI tools, consider the following tips:
- Avoid Linking Social Media Accounts: If you’re using an AI tool developed by a social media company, ensure you are not logged into your social media account to prevent linking personal information.
- Understand Privacy Settings: Familiarize yourself with the AI tool’s privacy settings and use “Incognito Mode” when available. Avoid sharing conversations unless necessary.
- Avoid Sharing Personal Information: Do not feed any AI tool your private details, and never share personally identifiable information (PII) such as full names, addresses, or financial data.
- Review Privacy Policies: Read and understand the privacy policies of AI tools. Use AI to summarize lengthy policies if needed.
Protect Your Social Media Accounts
Cybersecurity risks should never spread beyond a headline. Safeguard your social media accounts by using Malwarebytes Identity Theft Protection.
For more details, see the full Malwarebytes article [2].
Conclusion
The discovery of this vulnerability in Meta AI’s chatbot underscores the importance of vigilant security practices in AI development. As AI continues to evolve, ensuring robust privacy and security measures will be crucial. Users can take proactive steps to protect their information while engaging with AI tools, and companies must remain committed to addressing and mitigating such risks promptly.
References
1. TechCrunch. (2025, July 15). “Meta fixes bug that could leak users’ AI prompts and generated content.” Retrieved 2025-07-17.
2. Malwarebytes. (2025, July 17). “Meta AI chatbot bug could have allowed anyone to see private conversations.” Retrieved 2025-07-17.