Inside the privacy hazards of Meta’s latest AI application




Meta’s foray into consumer-oriented AI has raised alarms among privacy advocates and regulators, as its latest chatbot application subtly reveals sensitive user information through a public feed—an issue recognized in multiple regions, including the UK.

Introduced in April, the independent Meta AI app seamlessly connects with users’ Facebook and Instagram accounts, enabling both private and shareable AI discussions.

Nonetheless, the ‘Discover’ tab, a prominent aspect of the platform’s design, has faced backlash after users unintentionally shared medical, legal, and financial information in a publicly viewable stream.

Even though Meta assures that chats are private by default and can only be shared after users opt in, cybersecurity experts contend that the design encourages oversharing and does not sufficiently emphasize the visibility of shared content.

Frequently, these public exchanges feature real names, images, and contact information associated with users’ Meta profiles.

“There is evident potential for data protection vulnerabilities, especially regarding personally identifiable or sensitive information,” stated Calli Schroeder, senior counsel at the Electronic Privacy Information Center in Washington DC.

“Even when companies provide warnings, if the user experience is ambiguous or if defaults favor exposure, that creates a compliance risk.”

Meta, which recently allocated $14bn to AI-centered expansion through a partnership with Scale AI, is marketing the Discover feed as a social component of generative AI—a distinguishing factor from OpenAI’s ChatGPT or Google’s Gemini.

The firm asserts that the feature aims to deliver “inspiration and hacks” to its users and underscores that all sharing is opt-in.

However, an examination of publicly shared chats by various media sources highlights instances of users posting veterinary invoices with home addresses, legal documents, school discipline records, and medical information—all readily available and frequently connected to Instagram usernames.

The danger goes beyond inadvertent posts. Meta’s integration of AI across Facebook, Instagram, and WhatsApp suggests that users may struggle to distinguish between secured messaging platforms and the open visibility provided by Meta AI.

The organization has noted that AI inquiries are not encrypted, even if they are submitted through WhatsApp’s interface.

While Meta has not faced accusations of violating any US or UK privacy regulations, legal professionals indicate that the application’s framework could attract examination under current transparency and consent mandates.

Meta’s latest public statements assert that users are “in control” and have the ability to remove posts after they are published. Nevertheless, analysts maintain that post-publication deletion does not alleviate reputational harm or subsequent misuse of data, particularly when images or prompts have already been indexed or screenshotted.

The company’s AI assistant recently addressed a question regarding user data exposure by claiming: “Meta provides tools and resources to help users manage their privacy, but it’s an ongoing challenge.” This response was visible in the public feed.

Meta’s AI division is now pivotal to its strategy for maintaining users and advertising income in an increasingly commoditized social media environment.

Chief executive Mark Zuckerberg recently revealed that Meta AI had achieved one billion interactions across its platforms.

This scale may bring commercial advantages, but it also increases regulatory scrutiny, particularly as legislators seek to impose clearer legal accountability on AI.
