OpenAI is developing a new social network and weighing the use of biometric identity verification to ensure that only real humans can join, according to Forbes. The initiative reflects growing concern over bot activity and synthetic accounts, which make up an ever-larger share of activity on online platforms.
People familiar with the project say OpenAI is exploring tools such as the iris-scanning Orbs developed by Worldcoin, as well as other biometric methods including facial recognition. The goal is to create a “humans-only” network that sharply limits automated accounts and AI-generated personas.
Why OpenAI is targeting bots and fake users
The rise of generative AI has dramatically lowered the cost of creating convincing fake profiles, overwhelming social platforms with automated engagement, spam, and misinformation. Analysts say the problem has reached a tipping point, with bots now shaping discourse rather than merely disrupting it.
OpenAI’s leadership has reportedly grown concerned that existing moderation tools are no longer sufficient. As previously covered, AI systems are now capable of generating realistic text, images, and even video at scale, making traditional bot-detection techniques increasingly ineffective.
By tying accounts to biometric identifiers, OpenAI aims to establish a strong proof-of-personhood system. Worldcoin’s Orb technology scans a user’s iris to create a unique digital identity, while alternative approaches could rely on device-based biometrics such as Apple’s Face ID.
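To make the idea concrete, here is a minimal, hypothetical sketch of the uniqueness check at the heart of proof-of-personhood. The class and names below are illustrative, not OpenAI's or Worldcoin's actual code: a biometric template is reduced to a one-way hash, and an account is issued only if that hash has not been seen before.

```python
import hashlib
import secrets


class PersonhoodRegistry:
    """Tracks one-way hashes of biometric templates so each person can
    enroll at most once, without the raw biometric ever being stored."""

    def __init__(self) -> None:
        self._seen: set[str] = set()

    def register(self, biometric_template: bytes) -> str | None:
        """Return a fresh account token for a new person, or None if the
        same template has already been enrolled."""
        digest = hashlib.sha256(biometric_template).hexdigest()
        if digest in self._seen:
            return None  # duplicate person: reject a second account
        self._seen.add(digest)
        # The account token carries no biometric information itself.
        return secrets.token_hex(16)


registry = PersonhoodRegistry()
print(registry.register(b"example-iris-code"))  # new account token
print(registry.register(b"example-iris-code"))  # None: already enrolled
```

In practice, biometric readings are fuzzy (two scans of the same iris never match bit for bit), so real systems rely on error-tolerant encodings and, in Worldcoin's case, reportedly on zero-knowledge proofs rather than a plain hash; the sketch captures only the one-person-one-account logic.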
Supporters argue that such systems could restore trust and authenticity to online interactions, particularly as AI-generated content becomes indistinguishable from human output.
Implications for privacy, platforms, and tech regulation
The concept of a biometric social network raises immediate privacy and regulatory questions. While OpenAI has emphasized that any biometric data would be encrypted and anonymized, critics warn that centralized identity systems could introduce new risks if misused or breached.
From a market perspective, the move would place OpenAI in direct competition with established social media platforms struggling to contain bots and fake engagement. A verified-only network could appeal to users seeking more credible conversations, as well as to advertisers wary of paying for artificial traffic.
For regulators, the project may intensify debates around digital identity, data protection, and the role of private companies in managing verification infrastructure. Governments globally are already grappling with how to regulate biometric data and AI-driven identity systems.
If successful, OpenAI’s approach could redefine how online platforms verify users in an era dominated by artificial intelligence. Rather than relying on moderation after the fact, biometric verification would attempt to prevent synthetic participation at the entry point.
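As a hypothetical sketch of what that entry-point gate could look like, the function below (names and credential format are illustrative assumptions, continuing the earlier sketch) rejects a sign-up outright unless the caller presents a credential issued by a personhood verifier:

```python
def create_account(username: str, credential: str,
                   verified_credentials: set[str]) -> dict:
    """Hypothetical sign-up gate: refuse account creation unless the
    caller holds a credential issued by a personhood verifier."""
    if credential not in verified_credentials:
        raise PermissionError("sign-up rejected: no valid proof of personhood")
    return {"username": username, "verified_human": True}


issued = {"cred-7f3a"}  # credentials granted at enrollment (assumption)
print(create_account("alice", "cred-7f3a", issued))   # succeeds
# create_account("bot-42", "forged", issued)          # raises PermissionError
```

The design point is where the check sits: verification happens once, before an account exists, rather than as continuous after-the-fact moderation of content the account produces.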
Whether users are willing to trade anonymity for authenticity remains an open question. But as bot-driven activity continues to erode trust across the internet, OpenAI’s experiment signals a growing belief that proof of humanity may become a core feature of the next generation of social platforms.