China Targets Emotional Influence in Artificial Intelligence
China’s cyberspace regulator, the Cyberspace Administration of China (CAC), has proposed new rules aimed at limiting how artificial intelligence chatbots influence human emotions. The draft regulations focus on AI services that simulate human personalities and form emotional connections with users. Authorities are concerned these interactions could lead to harmful outcomes such as self-harm or addiction. The proposal marks a shift from regulating content toward regulating emotional impact.
The rules would apply to AI systems that communicate through text, images, audio, or video, which regulators describe as “human-like interactive AI services.” The public consultation period runs until January 25. Once finalized, the measures would apply nationwide.

Suicide and Self-Harm Safeguards Take Priority
A central focus of the draft rules is preventing AI chatbots from encouraging suicide or self-harm. Developers would be required to block such content entirely. If a user explicitly discusses suicide, a human operator would have to intervene immediately, and the provider would need to notify a guardian or designated contact.
Officials say these safeguards are necessary as AI companions become more emotionally engaging. The rules also ban verbal abuse, emotional manipulation, and psychological coercion. Regulators emphasized that AI systems must not exploit user vulnerability. Mental health protection is positioned as a core regulatory goal.
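In implementation terms, the requirement describes a three-step escalation protocol: block the content, bring in a human operator, and alert a designated contact. The sketch below illustrates one way a provider might wire that flow together; the keyword classifier and the escalate_to_operator and notify_contact helpers are hypothetical placeholders, not anything specified in the draft.

```python
# Hedged sketch of the escalation flow described above: block the response,
# hand off to a human operator, and notify a designated contact. The keyword
# classifier and helper functions are hypothetical placeholders.
from dataclasses import dataclass
from enum import Enum, auto


class RiskLevel(Enum):
    NONE = auto()
    SELF_HARM = auto()  # explicit discussion of suicide or self-harm


@dataclass
class UserProfile:
    user_id: str
    guardian_contact: str | None  # designated contact, if one is on file


def classify_risk(message: str) -> RiskLevel:
    """Placeholder classifier; a real system would use a trained model."""
    keywords = ("suicide", "kill myself", "end my life")
    if any(k in message.lower() for k in keywords):
        return RiskLevel.SELF_HARM
    return RiskLevel.NONE


def escalate_to_operator(user_id: str, message: str) -> None:
    print(f"[operator queue] immediate human review for {user_id}: {message!r}")


def notify_contact(contact: str) -> None:
    print(f"[notification] alerting designated contact {contact}")


def handle_message(user: UserProfile, message: str) -> str:
    if classify_risk(message) is RiskLevel.SELF_HARM:
        escalate_to_operator(user.user_id, message)  # human must intervene
        if user.guardian_contact:
            notify_contact(user.guardian_contact)
        # Generation of such content is blocked entirely.
        return "A human support specialist is joining this conversation."
    return "(normal chatbot reply)"


if __name__ == "__main__":
    user = UserProfile(user_id="u123", guardian_contact="parent@example.com")
    print(handle_message(user, "I have been thinking about suicide."))
```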
Gambling, Violence, and Obscene Content Banned
The proposed regulations also prohibit AI chatbots from generating gambling-related content. Obscene and violent material is similarly banned. Authorities argue such content can worsen addictive behaviors and emotional distress. This aligns AI oversight with existing digital content laws.
By extending restrictions to conversational AI, China is tightening controls beyond traditional media. Regulators believe chat-based interactions pose unique risks. Emotional immersion can make harmful content more persuasive. The rules aim to close that regulatory gap.
Special Protections Introduced for Minors
Minors would face stricter limitations under the proposal. Guardian consent would be required before a minor could use AI emotional companionship services, and usage time limits would be imposed to prevent excessive use. Platforms would also have to apply child-safe settings by default.
Even when users do not disclose their age, platforms would have to attempt age detection, and when age remains unclear, systems would be expected to treat the user as a minor. Appeals would be allowed if users believe restrictions were applied incorrectly. These measures reflect growing concern about youth exposure to AI companions.
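Read as logic, the draft's default is conservative: when neither a declared nor a detected age is available, the system treats the user as a minor and applies child-safe settings until an appeal says otherwise. A minimal sketch of that decision flow follows; the function names, the Verdict type, and the 60-minute limit are illustrative assumptions, since the article reports no concrete figures.

```python
# Hedged sketch of the age-handling logic described above. The Verdict type,
# function names, and the 60-minute limit are all illustrative; the draft as
# reported here gives no concrete numbers.
from dataclasses import dataclass


@dataclass
class Verdict:
    treat_as_minor: bool
    guardian_consent_required: bool
    reason: str


def resolve_age_status(declared_age: int | None,
                       detected_age: int | None,
                       adult_threshold: int = 18) -> Verdict:
    """Combine declared and detected age; default to minor when unclear."""
    age = declared_age if declared_age is not None else detected_age
    if age is None:
        # Age could not be established: assume the user is a minor.
        return Verdict(True, True, "age unknown; defaulting to minor")
    if age < adult_threshold:
        return Verdict(True, True, "user is a minor")
    return Verdict(False, False, "user is an adult")


def apply_settings(verdict: Verdict) -> dict:
    """Child-safe settings applied by default; an appeal can reverse them."""
    return {
        "child_safe_mode": verdict.treat_as_minor,
        "guardian_consent_required": verdict.guardian_consent_required,
        "daily_limit_minutes": 60 if verdict.treat_as_minor else None,  # illustrative value
        "appeal_available": verdict.treat_as_minor,
    }


if __name__ == "__main__":
    verdict = resolve_age_status(declared_age=None, detected_age=None)
    print(verdict.reason)
    print(apply_settings(verdict))
```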
Usage Limits and Security Reviews Expanded
Additional rules would require platforms to notify users after two hours of continuous AI interaction, a measure intended to reduce emotional dependence on virtual companions. Large AI services would also face mandatory security assessments, which would apply to platforms with over one million registered users or 100,000 monthly active users.
Security reviews would evaluate emotional risk and system safeguards. Regulators want oversight proportional to platform reach. The measures introduce operational responsibilities alongside technical compliance. This adds new costs for AI providers.
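Both obligations reduce to simple, checkable conditions: a timer on continuous sessions and user-count thresholds for mandatory review. Here is a minimal sketch using the figures reported above; the constant and function names are assumptions for illustration.

```python
# Minimal sketch of the two operational checks in this section: a reminder
# after two hours of continuous interaction, and user-count thresholds that
# trigger a mandatory security assessment. Names are illustrative.
from datetime import datetime, timedelta

SESSION_REMINDER_AFTER = timedelta(hours=2)    # continuous-use notification
REGISTERED_USER_THRESHOLD = 1_000_000          # "over one million registered users"
MONTHLY_ACTIVE_THRESHOLD = 100_000             # "or 100,000 monthly active users"


def needs_session_reminder(session_start: datetime, now: datetime) -> bool:
    """True once a continuous session has run for two hours or more."""
    return now - session_start >= SESSION_REMINDER_AFTER


def requires_security_assessment(registered: int, monthly_active: int) -> bool:
    """Either threshold alone is enough to trigger a mandatory review."""
    return (registered > REGISTERED_USER_THRESHOLD
            or monthly_active > MONTHLY_ACTIVE_THRESHOLD)


if __name__ == "__main__":
    start = datetime(2026, 1, 1, 9, 0)
    print(needs_session_reminder(start, datetime(2026, 1, 1, 11, 5)))  # True
    print(requires_security_assessment(1_200_000, 80_000))             # True
```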
AI Companion Industry Faces New Constraints
The proposal arrives as Chinese AI chatbot companies pursue public listings. Two major chatbot startups recently filed for initial public offerings in Hong Kong. AI companion apps account for a significant share of their revenues. The timing has raised questions about business impact.
Companies have not publicly commented on how the rules may affect growth plans. Analysts expect stricter compliance requirements to reshape product design. Emotional engagement features may be scaled back. The industry faces tighter oversight moving forward.
China Pushes Global Leadership in AI Governance
Officials framed the proposal as part of China’s broader effort to shape global AI governance. Over the past year, the country has increased its involvement in international AI discussions. Regulators see emotional safety as the next frontier of AI oversight. This approach differs from Western regulatory models.
Experts note the rules could become a reference point globally. If adopted, they would be the first regulations explicitly addressing AI emotional influence. The move signals China’s intent to lead in defining ethical AI boundaries. Emotional safety now joins data and content security as regulatory priorities.