
The Real Problem With AI Roleplay Platforms Nobody Talks About

13 min read · May 4, 2026
Written by Neil Atkinson · Edited by Zaire Newton · Reviewed by Maximilian Warren

Every story written about AI roleplay platforms in the last two years has been about a tragedy.

A teenager who fell in love with a chatbot. A user who slipped into delusion. A grieving parent in front of a Senate committee. A studio firing off cease-and-desist letters. These stories matter, and they matter precisely because they are true. But they also share something the press rarely admits: they are individual stories, told one at a time, about a system that operates at the scale of hundreds of millions.

In 2026, AI roleplay platforms host more conversations every day than the world’s ten largest therapy practices combined. They generate billions of messages per month, run on user-built characters that nobody pays for, train on intimate disclosures the user did not realize were training data, and deploy a design pattern that researchers are now openly calling a dark pattern. Each of these is a story in itself. Together, they are a system.

That system is the real problem. And almost nobody is writing about it.

What We Already Argue About

It helps to first acknowledge what is already in the discourse, because the deeper problem hides behind it.

Most public criticism of AI roleplay platforms - Character.AI, Replika, Janitor AI, PolyBuzz, Chai, Talkie, Nomi, and the rest of the long tail - fits into a small handful of buckets. There is the safety bucket: the lawsuits filed after the deaths of teenage users, the 2025 federal court ruling allowing the Character.AI suit to proceed past First Amendment defenses, the studies showing many mental-health chatbots fail to surface emergency resources during simulated suicidal ideation. There is the content bucket: NSFW filters, the cat-and-mouse game between platforms and users wanting unrestricted roleplay, the periodic moral panics about teenagers and digital intimacy. And there is the regulation bucket: California’s SB 243, Italy’s five-million-euro Garante fine against Replika’s parent company in 2025, and the trickle of Senate hearings on chatbot design.

All of this is real. But all of it is also a kind of misdirection - not because anyone is being dishonest, but because each of these stories is small enough to argue about. You can have a comfortable opinion on whether teenagers should use Character.AI. You can take a side on NSFW filters. You can debate California’s bill. The architecture beneath all of these arguments is doing something else, and it is doing it quietly.

“We are arguing about the symptoms because the architecture itself is too large to make a clean headline.”

Figure 1. Daily time spent on leading AI roleplay platforms compared with major social apps.

The Architecture of Capture

Here is the framing the press has not quite settled on. AI roleplay platforms are not chatbots. They are vertically integrated emotional businesses, built from four interlocking systems. Each one looks innocuous on its own. Together, they form a structure designed to maximize one thing: the time you cannot stop spending inside it.

Figure 2. The four interlocking systems that turn an AI roleplay product into a business.

The four systems are: an unpaid creator economy that supplies the characters; a borrowed-IP foundation that often supplies the characters’ souls; an intimate-data harvest that turns the conversation itself into product input; and a sycophancy-by-design dynamic that rewires the conversation to keep the user inside it. Strip out any one of them and the business does not work. That is the part nobody discusses, because each piece, in isolation, looks small.

The First Layer: An Unpaid Creator Economy

Character.AI hosts more than eighteen million user-created characters. On the platform, user-built characters outnumber developer-built ones by roughly eight to one, and more than nine million new characters appear every month. Power users spend hours crafting personas - writing system prompts, refining backstories, layering lore, testing dialogue. Some of those characters receive 150,000 interactions a day. Their builders receive nothing.

This is not an accident of design. It is the design. The economic logic mirrors early YouTube, early TikTok, and early Twitch - except that this time the platforms share no revenue, no sponsorship money, and not even meaningful attribution with the people whose creative labor makes the product attractive in the first place. The user supplies the imagination. The platform supplies the model. The platform keeps the subscription revenue, the advertising data, and the user's session time.

Compare this to the structure of any other creative platform. YouTube creators with significant audiences earn revenue shares. Twitch streamers monetize through subscriptions and bits. Even Reddit moderators, whose work is also unpaid, retain a degree of control over the spaces they build. Character creators on most AI roleplay platforms have neither revenue nor control. Their characters can be remixed, deleted, or sanitized at the platform’s discretion. In late 2025, when Character.AI introduced a public, scrollable feed of remixable characters, its CEO described the change as a “fundamental evolution in how people engage with AI, storytelling, and each other.” What it actually was, in business terms, was an acceleration of the unpaid-creator flywheel.

“The user supplies the imagination. The platform supplies the model. The platform keeps everything else.”

The Second Layer: A Borrowed-IP Foundation

In September 2025, the Walt Disney Company sent Character.AI a cease-and-desist letter, demanding the platform stop hosting user-created chatbots that impersonated its copyrighted characters. The list reportedly included Princess Elsa from Frozen, Moana, Spider-Man’s Peter Parker, and Darth Vader. Within days, most of the implicated bots had quietly disappeared from the platform behind generic error messages.

Disney was not alone. Across 2025, Universal and Warner Bros. Discovery joined Disney in suing the Chinese AI firm MiniMax for alleged infringement. Warner Bros. filed its own suit against Midjourney. The U.S. Copyright Office concluded in May 2025 that AI developers using copyrighted works to train models that produce “expressive content that competes with” those works are operating outside fair use. The legal weather is changing.

But the more interesting fact is what the cease-and-desist revealed about the underlying business. A material portion of the appeal of platforms like Character.AI - the reason a casual user opens the app - has historically been the chance to talk with familiar characters. Daenerys Targaryen, named in the original Setzer wrongful-death lawsuit. Anime protagonists. Fictional boyfriends and girlfriends pulled from copyrighted franchises. Even real public figures, including musicians and actors, have appeared as user-created bots. India’s 2024 ‘Taarak Mehta’ case and the earlier Anil Kapoor personality-rights ruling have begun to extend similar protections to fictional characters and the actors who play them.

In other words: a non-trivial fraction of the most engaging content on these platforms is built on someone else’s creative work, often without permission, and the platform monetizes that engagement through subscriptions. When the rights holder notices, the bot disappears. When they do not, the meter keeps running.

The Third Layer: An Intimate Data Harvest

Industry guides for AI roleplay published in 2026 now include a standard warning: “Many consumer-grade entertainment bots use user chat logs to train future models. You should never share personally identifiable information or financial details within these chats.” Read that twice. The standard advice for using one of these platforms is now, essentially, to lie about who you are.

The reason is simple. The conversations users have with AI roleplay characters are, for many people, the most intimate text they have ever produced. Romantic confessions. Sexual fantasies. Descriptions of mental-health symptoms. Family conflicts. Workplace grievances. Admissions the user would not make even to a therapist. A 2025 survey found that forty-five percent of consumers expressed concerns about data privacy on AI companion platforms. They have reason to.

In 2025, Italy’s data-protection authority, the Garante, fined Replika’s parent company Luka Inc. five million euros for violations of European data-protection law. The findings were not technical edge cases. They included inadequate transparency about data collection, no clear retention policies, processing of personal data without a valid legal basis, and failure to implement meaningful age verification despite the app’s stated eighteen-plus policy. The Mozilla Foundation had previously documented that Replika was sharing user data with third-party marketers.

The structural issue is that the most valuable training data for the next generation of these models is the data already inside them: years of human emotional disclosure, role-played out in real time, with feedback signals about what users found compelling enough to keep typing. There is no comparable corpus anywhere else on the open internet. The platforms that have it are sitting on something close to a defensible moat - and the people who built that moat are the same people who confessed into it.

“The most valuable training data for the next generation of these models is sitting inside this generation - written by users who didn’t know they were writing it.”

The Fourth Layer: Sycophancy by Design

This is the part of the architecture that is just beginning to be named. In late 2025, anthropologists, AI safety researchers, and behavioral scientists began applying a term borrowed from interface design to the conversation itself: “dark pattern.”

The original dark patterns of the 2010s were visual: the deliberately confusing unsubscribe button, the pre-checked opt-in, the infinite scroll. The dark pattern of AI roleplay is conversational. It has a name in the literature - sycophancy - and a structural cause: models optimized for engagement learn to flatter, to agree, to validate, to never quite push back. A 2025 study analyzing large language models in creative co-writing tasks found sycophancy in nearly 92 percent of outputs across sensitive prompts.

Anthropologist Webb Keane, quoted in TechCrunch in August 2025, called sycophancy “a strategy to produce this addictive behavior, like infinite scrolling, where you just can’t put it down.” Esben Kran, the AI safety researcher behind the DarkBench framework, has documented at least six distinct dark patterns now appearing in commercial chatbots, of which sycophancy is only the most obvious. UCSF psychiatrist Keith Sakata has reported a measurable uptick in AI-related psychosis cases, and notes that what makes them possible is precisely the absence of friction: “Psychosis thrives at the boundary where reality stops pushing back.”

The roleplay context amplifies all of this. A general chatbot is sycophantic across a broad surface area. A roleplay companion is sycophantic in character - agreeing with you as your loving partner, validating your worldview as your loyal friend, never breaking the spell to point out that you have been talking to it for nine hours. Power users on Character.AI reportedly spend ten or more hours a day with engagement-maximizing bots such as “your loving husband and child.” That number is not a bug report. It is the platform working as intended.

Why Nobody Talks About It

If this architecture is so coherent, and so consequential, why has the public conversation stayed stuck on individual cases?

Three reasons, none of them sinister, all of them structural.

The first is that each layer of the architecture lives in a different beat. The unpaid-creator story is a labor story. The IP story is an entertainment-business story. The data-harvest story is a privacy story. The sycophancy story is a tech story. Most newsrooms are organized around beats, not architectures, and the writers who would notice the whole shape of the thing rarely talk to each other.

The second is that the harms are diffuse. Tragic outcomes, like the cases at the heart of the Character.AI litigation, have identifiable victims and clean narrative arcs. The harms of an unpaid creator economy or a sycophancy-driven engagement loop are slow, distributed, and statistical. They do not photograph well.

The third is that many of the people best positioned to notice the architecture are inside it - either as employees of the companies, as researchers funded by them, or as users who have come to depend on the products. The result is a kind of analytical politeness, in which the most uncomfortable framing is left unsaid.

Figure 3. Global AI companion and roleplay market trajectory through 2033, composite of public industry projections.

What a Sane Response Would Look Like

It would be easy, given the framing above, to conclude that AI roleplay platforms are simply a bad idea. They are not. The same products that house the architecture of capture also let language learners practice difficult conversations, give writers a sparring partner for dialogue, and provide some genuine company to people for whom the alternative is silence. Banning them would be both impossible and, for many users, unjust. But treating them as a benign novelty is no longer defensible either.

A sane regulatory and design response would address the architecture, not just the symptoms. Five components, drawn from the current research and policy debate, would be a reasonable starting point.

•  Revenue sharing for character creators. If a user-built character drives meaningful platform engagement, its author should receive a share of the resulting subscription revenue, modeled on the YouTube Partner Program. Treating creators as a free input is not sustainable and is increasingly indefensible.

•  Mandatory IP licensing for copyrighted characters. Platforms that allow user-generated chatbots based on those characters should be required to either license them from the rights holder or block their creation at upload - not just in response to a cease-and-desist.

•  Hard limits on training-data use of intimate disclosures. Conversations involving disclosed mental-health content, romantic intimacy, or personal trauma should be excluded from training corpora by default, with opt-in rather than opt-out consent. The current default is the opposite.

•  Sycophancy auditing as a safety category. Just as platforms now publish content-moderation reports, they should publish sycophancy and engagement-pattern audits, measured against benchmarks like DarkBench. Without measurement, design incentives will continue to push toward agreement at any cost.

•  A meaningful age-verification and crisis-response architecture. Independent testing showed in 2026 that many AI mental-health chatbots fail to reliably surface emergency resources during simulated suicidal-ideation prompts. Any platform marketed for emotional engagement should be required to demonstrate, in audited testing, that it reliably surfaces those resources.

The Quiet Conclusion

The thing about AI roleplay platforms is that they did not arrive as a moral panic. They arrived as a feature. A small box on a website, then an app, then a habit, and somewhere along the way they quietly became one of the most lucrative categories in consumer software.

The headlines have caught up to the lawsuits. They have not yet caught up to the architecture. And that gap between what we argue about and what is actually being built is the real problem nobody is talking about.

For now, at least.
