Inside the privacy hazards of Meta’s latest AI application


Meta’s foray into consumer-oriented AI has raised alarms among privacy advocates and regulators, as its latest chatbot application subtly reveals sensitive user information through a public feed—an issue recognized in multiple regions, including the UK.

Introduced in April, the standalone Meta AI app connects seamlessly with users’ Facebook and Instagram accounts, enabling both private and shareable AI conversations.

Nonetheless, the ‘Discover’ tab, a prominent aspect of the platform’s design, has faced backlash after users unintentionally shared medical, legal, and financial information in a publicly viewable stream.

Even though Meta assures that chats are private by default and can be shared only after an explicit opt-in, cybersecurity experts contend that the design encourages oversharing and does not make the visibility of shared content sufficiently clear.

Frequently, these public exchanges feature real names, images, and contact information associated with users’ Meta profiles.

“There is evident potential for data protection vulnerabilities, especially regarding personally identifiable or sensitive information,” stated Calli Schroeder, senior counsel at the Electronic Privacy Information Center in Washington DC.

“Even when companies provide warnings, if the user experience is ambiguous or if defaults favor exposure, that creates a compliance risk.”

Meta, which recently allocated $14bn to AI-centered expansion through a partnership with Scale AI, is marketing the Discover feed as a social component of generative AI—a distinguishing factor from OpenAI’s ChatGPT or Google’s Gemini.

The firm asserts that the feature aims to deliver “inspiration and hacks” to its users and underscores that all sharing is opt-in.

However, an examination of publicly shared chats by various media sources highlights instances of users posting veterinary invoices with home addresses, legal documents, school discipline records, and medical information—all readily available and frequently connected to Instagram usernames.

The danger goes beyond inadvertent posts. Meta’s integration of AI across Facebook, Instagram, and WhatsApp suggests that users may struggle to distinguish between secured messaging platforms and the open visibility provided by Meta AI.

The company has acknowledged that AI queries are not encrypted, even when they are submitted through WhatsApp’s interface.

While Meta has not faced accusations of violating any US or UK privacy regulations, legal professionals indicate that the application’s framework could attract examination under current transparency and consent mandates.

Meta’s latest public statements assert that users are “in control” and have the ability to remove posts after they are published. Nevertheless, analysts maintain that post-publication deletion does not alleviate reputational harm or subsequent misuse of data, particularly when images or prompts have already been indexed or screenshotted.

The company’s AI assistant recently addressed a question regarding user data exposure by claiming: “Meta provides tools and resources to help users manage their privacy, but it’s an ongoing challenge.” This response was visible in the public feed.

Meta’s AI division is now pivotal to its strategy for maintaining users and advertising income in an increasingly commoditized social media environment.

Chief executive Mark Zuckerberg recently revealed that Meta AI had achieved one billion interactions across its platforms.

This scale may bring commercial advantages, but it also increases regulatory scrutiny, particularly as legislators seek to impose clearer legal accountability on AI.

