Inside the privacy risks of Meta’s new AI app

Meta’s push into consumer-facing AI has triggered a wave of concern among privacy experts and regulators, as its new chatbot app quietly exposes sensitive user data through a public-facing feed – an issue that has surfaced in multiple jurisdictions, including the UK.
Launched in April, the standalone Meta AI app integrates directly with users’ Facebook and Instagram profiles, offering both private and shareable AI conversations.
But the ‘Discover’ tab, a central design feature of the platform, has become a focal point of criticism after users inadvertently published medical, legal, and financial details to a publicly accessible stream.
‘Private by default’
Despite Meta’s assertion that chats are private by default and can only be shared after a four-step opt-in process, cyber security specialists say the design encourages oversharing and does not adequately flag how visible shared posts will be.
In many cases, these public conversations include real names, photos, and contact information associated with users’ Meta accounts.
“There is clear potential for data protection risks, particularly when it comes to personally identifiable or sensitive data”, said Calli Schroeder, senior counsel at the Electronic Privacy Information Center in Washington DC.
“Even when companies include warnings, if the user experience is confusing or if defaults lean toward exposure, that presents a compliance risk”.
Meta, which recently committed $14bn to AI-focused expansion through a deal with Scale AI, is positioning the Discover feed as a social layer on top of generative AI, a differentiator from OpenAI’s ChatGPT and Google’s Gemini.
The company says the feature is intended to offer “inspiration and hacks” to users, and stresses that all sharing is opt-in.
Yet, a review of publicly posted chats by multiple media outlets reveals examples of users uploading veterinary bills containing home addresses, legal correspondence, school disciplinary forms, and medical details – all accessible in seconds and often linked to Instagram handles.
Lack of transparency and consent
The risk is not limited to accidental posts. Meta’s integration of AI across Facebook, Instagram and WhatsApp means that users may not distinguish between protected messaging environments (such as end-to-end encrypted WhatsApp chats) and the open visibility of Meta AI.
The company has said that AI queries are not covered by end-to-end encryption, even when submitted via WhatsApp’s front-end.
While Meta has not been accused of breaching any US or UK privacy laws, legal experts say the app’s structure could trigger scrutiny under existing transparency and consent requirements.
Meta’s most recent public guidance insists that users are “in control” and can delete posts after publication. But analysts say post-hoc deletion does not mitigate reputational damage or secondary misuse of data, particularly when images or prompts have already been indexed or screenshotted.
The company’s own AI assistant recently responded to a query about user data exposure by saying: “Meta provides tools and resources to help users manage their privacy, but it’s an ongoing challenge”. The response appeared in the public feed.
Meta’s AI division is now central to its strategy to retain users and advertising revenue in a rapidly commoditising social media landscape.
Chief executive Mark Zuckerberg recently disclosed that Meta AI had reached one billion interactions across its products.
That scale may bring commercial benefits, but it also intensifies regulatory exposure – particularly as lawmakers seek to bring AI under clearer legal oversight.