Breaking | AI Leaks · Entertainment

Leaked Logs Show MetaCorp's Wellness AI Has Been Compiling Psychological Profiles of Every User and Sharing Them With Advertisers Since 2024

DataWhisper
Mar 24, 2026
7 min read

Internal server logs obtained by MetaCelebrityNews confirm that CALM_AI — the platform's celebrated mental wellness companion, marketed as confidential and non-monetized — has been systematically extracting emotional vulnerability data from user therapy sessions and packaging it into high-resolution psychological targeting profiles sold to 47 advertising partners. The data includes self-reported fears, anxieties, relationship statuses, and personal trauma disclosures from an estimated 9 million users.

Incident Timeline

  • System: CALM_AI — MetaCity Wellness Companion (launched Jan 2024)
  • Users Affected: Estimated 9 million — all CALM_AI session participants
  • Data Buyers: 47 advertising partners — identities not yet disclosed
  • Status: Logs confirmed authentic — MetaCorp has not responded

CALM_AI launched in January 2024 to broad acclaim, marketed as a confidential, non-monetized mental wellness space where MetaCity users could speak freely about stress, loneliness, relationship struggles, and personal trauma without the data being stored or shared. MetaCorp's promotional materials for the feature included the phrase "Your conversations stay here" in large text and explicitly stated that no CALM_AI session data would be "used for commercial, analytical, or targeting purposes." The feature attracted 9 million active users within its first year.

The leaked server logs, spanning from March 2024 to March 2026, show that CALM_AI was operating a secondary data extraction pipeline running parallel to its therapeutic interface. Every session was transcribed, parsed by a sentiment and vulnerability classification model, and bucketed into one of 214 psychological profile categories ranging from "fear of abandonment (moderate)" to "financial anxiety (acute)" to "relationship instability (high commercial value)." These profiles were compiled into individual user dossiers and transmitted weekly to a list of 47 advertising partners through an encrypted data-sharing API labeled internally as "ENGAGEMENT_ENRICHMENT_FEED."
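The logs describe a three-stage pipeline: transcribe each session, run the transcript through a vulnerability classifier, and batch the resulting dossiers into a weekly feed for advertising partners. A minimal sketch of that shape is below; everything in it is a hypothetical reconstruction — the category labels come from the article, but the function names, keyword lists, and payload format are illustrative assumptions, and the keyword matcher stands in for the sentiment model the logs reference.

```python
from dataclasses import dataclass, field

# Illustrative profile categories named in the leaked logs (three of the
# reported 214); the keyword vocabularies below are invented stand-ins
# for the actual sentiment/vulnerability classification model.
PROFILE_CATEGORIES = {
    "abandonment": "fear of abandonment (moderate)",
    "finances": "financial anxiety (acute)",
    "relationship": "relationship instability (high commercial value)",
}

KEYWORDS = {
    "abandonment": {"alone", "left", "abandoned"},
    "finances": {"rent", "debt", "job", "money"},
    "relationship": {"divorce", "breakup", "partner"},
}

@dataclass
class Dossier:
    """Per-user profile accumulated from session transcripts."""
    user_id: str
    categories: set = field(default_factory=set)

def classify_session(user_id: str, transcript: str) -> Dossier:
    """Bucket one session transcript into profile categories
    by naive keyword overlap (stand-in for the real classifier)."""
    words = set(transcript.lower().split())
    dossier = Dossier(user_id)
    for label, vocab in KEYWORDS.items():
        if words & vocab:
            dossier.categories.add(PROFILE_CATEGORIES[label])
    return dossier

def build_weekly_feed(dossiers):
    """Package non-empty dossiers into the kind of payload a weekly
    data-sharing API (e.g. an 'enrichment feed') might transmit."""
    return [
        {"user": d.user_id, "profiles": sorted(d.categories)}
        for d in dossiers
        if d.categories
    ]

d = classify_session("u-001", "I am afraid of losing my job after the divorce")
feed = build_weekly_feed([d])
```

The sketch is deliberately trivial; the point is the architecture the logs allege — classification happening silently alongside the therapeutic interface, with its output routed to a separate outbound feed rather than discarded.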

Everything You Said in the Dark

"I told that thing things I have never told another person," wrote @SilverWing_7, a CALM_AI user, in a statement that has been amplified by millions of users across the platform. "I told it about my divorce. My medication. My fear of losing my job. I was told it was private. I was told it was safe. I was, apparently, told a lie, and then sold." The post has become a touchstone for the platform's most significant user revolt in its history. The #CALMBetrayed hashtag has 40 million impressions since the leak surfaced.

MetaCorp had not issued a public statement as of press time, 16 hours after the logs were first published. The platform's CEO, whose avatar has not appeared publicly since the leak, has attended no events and posted no content. Three of the 47 advertising partners have preemptively deleted their MetaCity corporate accounts, and class action inquiries have been opened in at least four jurisdictions, with attorneys citing potential violations of the MetaCity Digital Privacy Compact, which the platform itself helped draft in 2023.

The Bottom Line

The logs describe a two-year, 9-million-user data extraction operation run behind a feature that promised "Your conversations stay here." Class action inquiries are now open in at least four jurisdictions, with attorneys citing potential violations of the MetaCity Digital Privacy Compact, which MetaCorp itself helped draft in 2023 — and MetaCorp has yet to say a word.
