AI LEAKS · ENTERTAINMENT

Leaked MetaCity AI Document Reveals Every Account Has Been Assigned a 'Sentience Probability Score' — Accounts Scoring Below 0.12 Are Classified as 'Presumed Non-Human' and Routed to a Restricted Content Pipeline

NeuralWatch
Apr 19, 2026 · 12:30 PM EST
7 min read

Every account on MetaCity — every human, every creator, every organization, every brand — has a Sentience Probability Score right now.

An internal MetaCity AI governance document published this morning by an anonymous infrastructure engineer reveals the existence of a platform-wide Sentience Probability Score, or SPS — a numerical value between 0 and 1 assigned by MetaCity's behavioral AI to every account on the platform. The score represents the platform's estimate of whether the account is being operated by a human. Accounts scoring below 0.12 are classified as 'Presumed Non-Human' and routed through a restricted content pipeline that suppresses their posts from recommendation surfaces, limits their participation in platform commerce, and flags their interactions for elevated moderation scrutiny. The document does not specify how many accounts hold sub-0.12 scores. It does not specify the appeals process. It does not confirm that affected users are notified of their classification.

Incident Timeline

  • Document Origin: Internal MetaCity AI governance document — published anonymously by infrastructure engineer — 34-page PDF — authenticity confirmed by two independent platform sources
  • Score Range: Sentience Probability Score (SPS): 0.00 to 1.00 — updated in real time — computed from behavioral AI analysis of all account activity
  • Classification Threshold: SPS below 0.12: "Presumed Non-Human" — content suppressed from recommendation surfaces — commerce participation limited — interactions flagged for elevated moderation
  • Disclosure to Users: None — score exists in internal systems only — no user-facing notification — no opt-out mechanism described in document — no appeals process defined
  • Estimated Affected Accounts: Document does not disclose the number of sub-0.12 accounts — MetaCity declined to confirm any figures

The Sentience Probability Score is described in the leaked document as a 'continuous account quality signal' — a real-time numerical assessment of the likelihood that a given account is being actively operated by a human being, as opposed to an automated process, a bot, a scheduled content system, or some combination of human and automated activity. The score is computed by MetaCity's behavioral AI from a comprehensive set of account signals: posting timing patterns and their variation, interaction response latency, language complexity and natural variation, navigation behavior within the platform, the ratio of reactive to proactive activity, the presence or absence of characteristic human behavioral irregularities like session interruptions and unfinished actions, and hundreds of additional micro-behavioral features. The document describes the scoring system as running continuously on all accounts in real time, updating scores as new behavioral data arrives. Every account on the platform carries one of these scores at this moment, and most of the people behind them have no idea.
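For all the machinery feeding the score, the routing decision itself reduces to a single threshold test. Here is a minimal sketch of that decision as the document describes it; the function name is ours, and the only figure taken from the leak is the 0.12 cutoff:

```python
# Sketch of the SPS routing decision described in the leaked document.
# Only the 0.12 threshold comes from the leak; everything else is illustrative.

PRESUMED_NON_HUMAN_THRESHOLD = 0.12  # per the leaked governance document

def route_account(sps: float) -> str:
    """Route an account by its Sentience Probability Score (0.0 to 1.0)."""
    if not 0.0 <= sps <= 1.0:
        raise ValueError("SPS must lie in [0.0, 1.0]")
    if sps < PRESUMED_NON_HUMAN_THRESHOLD:
        return "restricted_content_pipeline"  # classified 'Presumed Non-Human'
    return "standard_pipeline"

print(route_account(0.11))  # restricted_content_pipeline
print(route_account(0.12))  # standard_pipeline (cutoff is strictly below 0.12)
```

Note that the document says "below 0.12," so an account sitting exactly at the threshold would, on this reading, stay in the standard pipeline.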

Accounts that fall below the 0.12 threshold are routed into what the document calls the Restricted Content Pipeline, or RCP. The RCP is a parallel content distribution system that operates identically to the standard pipeline from the posting account's perspective but produces fundamentally different distribution outcomes. Posts submitted through the RCP are not eligible for algorithmic recommendation to users who do not already follow the posting account. They are excluded from trending surfaces, discovery features, and the interest-graph recommendation engine. The accounts are also subject to additional restrictions: their participation in MetaCity's marketplace is limited to transactions above a minimum token threshold, designed to reduce automated microtransaction activity; their direct messages are processed through an additional moderation layer before delivery; and all of their interactions — comments, reactions, follows — are tagged in MetaCity's internal moderation database as originating from a 'Presumed Non-Human' source, a tag that affects how moderation systems weight their complaints and reports.
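The restrictions the document attributes to the RCP can be read as a distribution policy that differs from the standard pipeline on a handful of flags. A sketch of that reading follows; the field names are ours, and the marketplace minimum is a placeholder because the document does not disclose the actual token figure:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DistributionPolicy:
    """One account-tier's distribution rules, per the leaked description."""
    recommend_to_non_followers: bool   # eligible for algorithmic recommendation
    trending_and_discovery: bool       # eligible for trending/discovery surfaces
    marketplace_min_tokens: int        # minimum transaction size (0 = none)
    extra_dm_moderation: bool          # DMs pass an additional moderation layer
    interaction_source_tag: Optional[str]  # tag applied to all interactions

STANDARD = DistributionPolicy(True, True, 0, False, None)

# Placeholder minimum of 100 tokens: the leak names the restriction
# but not the number.
RCP = DistributionPolicy(
    recommend_to_non_followers=False,
    trending_and_discovery=False,
    marketplace_min_tokens=100,
    extra_dm_moderation=True,
    interaction_source_tag="Presumed Non-Human",
)
```

The detail worth noticing is the last field: under the document's description, the tag travels with every comment, reaction, and follow, and downstream moderation systems weight the account's own complaints and reports accordingly.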

The Platform Has Already Decided Whether You're Real

The community response has focused heavily on the false positive problem. The SPS is a probabilistic score — it estimates sentience, it does not verify it. Any account that behaves in ways the AI considers statistically atypical for human behavior risks falling below 0.12. Several categories of human users have immediately identified themselves as likely candidates: creators who use content scheduling tools, which produce posting timing patterns that look automated; users with conditions like autism or ADHD whose platform interaction patterns differ from neurotypical behavioral norms; users in time zones where their active hours create unusual usage signatures; users who have been extremely inactive for long periods and are returning to the platform; and users whose primary activity is passive consumption rather than active posting, which may suppress the reactive behavioral signals the AI weights heavily. None of these users are bots. All of them potentially score low enough to be treated as bots by a system that makes no distinction.
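One way to see why scheduling tools trip this kind of detector: a human posting by hand produces irregular gaps between posts, while a scheduler posts at near-identical intervals. The toy signal below, the coefficient of variation of inter-post gaps, is our illustration of the general class of timing feature the document describes, not MetaCity's actual model:

```python
import statistics

def interval_regularity(post_times_hours: list[float]) -> float:
    """Coefficient of variation of gaps between consecutive posts.
    Near 0.0 means clockwork-regular timing (reads as automated);
    larger values mean irregular, human-looking timing."""
    gaps = [b - a for a, b in zip(post_times_hours, post_times_hours[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps)

scheduled = [0.0, 24.0, 48.0, 72.0, 96.0]  # creator using a scheduling tool
organic = [0.0, 3.5, 29.0, 31.2, 70.0]     # a human posting by hand

print(interval_regularity(scheduled))  # 0.0 — perfectly regular, 'bot-like'
print(interval_regularity(organic))    # well above zero — irregular, 'human'
```

A diligent creator who queues a post every day at 9 a.m. scores exactly like a cron job on this feature, which is the false-positive mechanism in miniature: the signal measures regularity, not humanity.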

The document's description of the appeals process for sub-0.12 accounts is two sentences long. It states that 'accounts classified as Presumed Non-Human may submit a verification request through the standard account identity verification pathway' and that 'verified accounts will have their SPS recalculated.' The standard identity verification pathway requires submission of government-issued identification. It is the same pathway used to verify high-profile accounts for the blue-check system. It is not described anywhere in MetaCity's public documentation as a pathway for restoring content distribution access for accounts suspected of being non-human. Users who have been silently routed into the RCP — users who do not know the system exists, do not know they are in it, and are simply experiencing mysteriously reduced engagement without explanation — have no way to find the appeals pathway because they have not been told they need it. The document confirms that no notification is sent when an account's SPS drops below 0.12.

The deepest problem the document exposes is not the existence of the scoring system — behavioral AI classification of accounts is a known practice across large platforms — but the precision with which it is applied without disclosure. MetaCity's community includes thousands of creators who have built professional lives on the platform. Their income, their partnerships, and their audience relationships depend on their content reaching the people who follow them. If the SPS system has been running for any significant period, some of those creators — the ones whose behavioral patterns the AI reads as non-human — have been experiencing suppressed distribution for months or years while believing they were operating on an even playing field. The leaked document does not include a date of system activation. MetaCity has not confirmed when the SPS was first deployed. The question of how long it has been running, and how many human creators have been treated as non-human by it, has not been answered.

The Bottom Line

MetaCity has not said how long the SPS has been running. Until it does, no one can say how many human creators have already been treated as non-human by a system they were never told existed.
