Introduction
With closed-source cloud providers able to cut off your account at any time, the value of local open-source models keeps rising.
On April 15, a post on the tech community Hacker News reported that the American AI company Anthropic had updated the user policy for its closed-source model Claude, introducing an identity verification mechanism that has raised widespread privacy concerns in the community.
According to an article in Claude’s official help center, identity verification is triggered in certain cases and conducted through a third-party service provider, Persona. During the process, users must submit an original government-issued ID, such as a passport or driver’s license, and undergo “real-time facial recognition” to confirm their identity. The official documentation explicitly states that copies are not accepted.
Anthropic states that the measure is meant to prevent abuse and enforce its usage policies. Identity verification is not mandatory for all users and is triggered only in specific scenarios, such as access to certain advanced features, platform integrity checks, or other security and compliance measures. Anthropic also says that users’ personal data will not be used for model training and is not stored in Anthropic’s systems, but is handled by Persona.
Despite this, many users in the community have expressed strong concerns about privacy risks, data sharing, and policy transparency, particularly questioning Persona’s reliability.
Privacy Concerns Under Strict Regulation
Some users have noted that Persona may distribute data to as many as 17 subprocessors, far beyond the direct trust relationship between users and Anthropic. Moreover, Persona has been involved in serious data breaches in the past while serving platforms such as LinkedIn and Discord, with hackers putting more than a billion personal records up for sale. Combining real-time selfies with ID documents could create a powerful biometric database, raising concerns about future misuse.
On the other hand, Anthropic has not clearly stated which specific scenarios will trigger verification, which users find particularly unsettling. “The most critical question remains unanswered,” users pointed out.
Some speculate that Anthropic wants to stop users in certain countries, such as China, North Korea, Russia, and Iran, from accessing Claude through proxies, in order to avoid legal risk in the U.S. Others suspect identity harvesting: that Anthropic is collecting personal information to support potential future account bans. Even after completing verification, accounts may still be banned for being in “unsupported regions.”
Notably, one user shared a personal story about their 15-year-old son’s paid account being suspended due to a “detected signal of child usage,” requiring ID verification to prove he is over 18 to restore access. This led to discussions about Claude becoming “18+ only.”
Anthropic’s actions are not isolated, however. In recent years, AI companies have faced mounting regulatory pressure, from the EU AI Act and various U.S. state data privacy laws to concerns over child protection, content abuse, and national security. But today the verification covers only certain scenarios; will it expand tomorrow? The uncertainty remains.
Balancing Privacy and Convenience
In the long run, real-name verification can indeed help platforms enforce usage policies more effectively, but it will also likely accelerate the stratification of the AI ecosystem. For ordinary users, this may mean higher barriers to Claude’s advanced features. For enterprise users, compliance costs may rise, pushing organizations further toward open-source alternatives. As one user put it, “It’s time to seriously consider building our own local models.”
In the discussions, some users directly suggested Alibaba’s Qwen3.5 as the preferred alternative, citing its strong performance in code understanding, reasoning, and multi-file editing. “It can run locally on reasonable hardware and performs close to mid-2025 Claude Sonnet, though it may be a bit slower.” Local models are rapidly catching up to closed-source cloud models, making them a reliable fallback.
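For readers wondering what “running locally” actually looks like, here is a minimal sketch using the llama-cpp-python bindings with a quantized GGUF build. The file name, prompt, and settings below are placeholders rather than details from the discussion; substitute whatever model you download.

```python
# Minimal local-inference sketch (pip install llama-cpp-python).
# The GGUF file name is a placeholder; point it at any quantized model you have.
from llama_cpp import Llama

llm = Llama(
    model_path="./qwen-coder-q4_k_m.gguf",  # hypothetical local model file
    n_ctx=8192,       # context window; raise it if your RAM/VRAM allows
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Refactor this function to remove the global state: ..."}],
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])
```

Everything here runs on the local machine: no account, no ID check, and no network round trip.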
That said, while local deployment still lags cloud services in raw performance and convenience, privacy protection and stability are its core draws. Ordinary users once had little reason to worry about cloud privacy, but as one developer lamented, the current cloud model of “first requiring your personal information, then deciding whether to ban your account” is unsettling. Local models eliminate this vulnerable “trust chain” entirely.
Cost control is another significant consideration. Although Claude’s subscription plans offer a large token allowance, long-term use gets expensive and carries the risk of sudden usage limits. Running open-source models locally, by contrast, requires only a one-time hardware investment, with near-zero inference cost afterward. As the tooling ecosystem matures, the barrier to local deployment keeps dropping. “There is a learning curve, but once you get it running, you won’t go back” has become a common sentiment among users.
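On the maturing-tooling point: many local runners now expose an OpenAI-compatible HTTP endpoint, so existing client code can often be repointed at localhost with a one-line change. A hedged sketch, assuming an Ollama-style server on its default port; the URL and model tag are examples, not fixed values:

```python
# Sketch: pointing a standard OpenAI-style client at a local server
# instead of a cloud API. Assumes a local runner (e.g. Ollama or a
# llama.cpp server) exposing an OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local endpoint; no account required
    api_key="unused",                      # local servers typically ignore the key
)

resp = client.chat.completions.create(
    model="qwen-coder",  # whatever model tag your local server serves
    messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
)
print(resp.choices[0].message.content)
```

The appeal is that nothing in the application code changes except the base URL, which is much of why the switching cost keeps falling.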
Today, on one side stand strictly regulated, real-name-verified cloud models that may compromise privacy and can ban an account at any time. On the other stand decentralized, locally run open-source models that are slower but offer stability and security. Developers who prize safety and stability will naturally lean toward the latter. Ultimately, though, the two are not in zero-sum competition; for developers, enterprises, and users alike, the optimal strategy remains a flexible combination based on the scenario. After all, walking on two legs takes you further.