Meta has moved to address a significant security risk head on: a flaw in its popular AI chatbot that allowed users to view the private prompts and AI-generated responses of others. Sandeep Hodkasia, founder of security testing company AppSecure, found the potentially severe bug, and the discovery prompted fresh concern about privacy on the platform.
Hodkasia found the bug on December 26, 2024, while testing how Meta AI lets logged-in users edit their AI prompts to regenerate text and images. He explained that the unique numbers Meta's servers assigned to prompts were sequential and therefore "readily ascertainable." That predictable identifier became a major liability: because the servers did not verify that the requesting user was authorized to see a given prompt, a malicious actor could have scraped other users' original prompts and responses simply by cycling through prompt numbers with automated tools.
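The behavior Hodkasia describes is a textbook insecure direct object reference (IDOR): a server returns resources keyed by guessable identifiers without checking that the requester actually owns them. As a rough illustration of the missing safeguard, the Flask sketch below shows the kind of ownership check a prompt endpoint needs before returning data; the route, the in-memory store, and the user names are hypothetical stand-ins, not Meta's actual code.

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # required for Flask sessions

# Hypothetical in-memory store: sequential prompt IDs mapped to an owner
# and content, standing in for whatever backend actually serves prompts.
PROMPTS = {
    1001: {"owner": "alice", "text": "draw a cat astronaut"},
    1002: {"owner": "bob", "text": "summarize my private notes"},
}

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    prompt = PROMPTS.get(prompt_id)
    if prompt is None:
        abort(404)
    # The check whose absence defines an IDOR bug: a guessable sequential
    # ID proves nothing about ownership, so the server must confirm that
    # the logged-in user actually owns the prompt before returning it.
    if prompt["owner"] != session.get("user"):
        abort(403)
    return jsonify(prompt)
```

A common complementary hardening step is to replace sequential integers with long random identifiers (such as UUIDs), so the ID space cannot be enumerated even if an authorization check is ever missed.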
Upon discovering the vulnerability, Hodkasia immediately reported it to Meta, which awarded him a $10,000 bug bounty in recognition of his contribution to its security. Meta investigated, identified the root cause, and deployed a fix on January 24, 2025. The company also said it had found no evidence that the bug was exploited maliciously before it was patched.
Hodkasia shared his discovery exclusively with TechCrunch, where security editor Zack Whittaker covered both the bug and Meta's efforts to address it. Hodkasia stressed how urgent it is to fix vulnerabilities like this at a time when data privacy matters more than ever.
Meta’s ongoing bug bounty program rewards hundreds of security researchers for reporting vulnerabilities like this one. By paying those who responsibly disclose flaws, Meta aims to strengthen the security of its platforms and protect user data from potential breaches.