China releases draft rules to tighten oversight of human-like AI services

Image Credit: Reuters

China has released draft regulations aimed at strengthening supervision of artificial intelligence systems designed to mimic human personalities and form emotional connections with users, highlighting Beijing’s growing focus on managing the social impact of rapidly expanding AI technology.

The proposed rules, issued on Saturday by the country’s cyberspace regulator, have been opened for public consultation and target consumer-facing AI products that simulate human traits such as thinking patterns, communication styles, and emotional interaction. These systems often engage users through text, images, audio, video, or a combination of these formats.

The move reflects China’s broader effort to guide the development of artificial intelligence while reinforcing safety, ethical standards, and user protection. Authorities have increasingly emphasized the need to balance innovation with tighter governance as AI tools become more deeply embedded in everyday digital life.

According to the draft, AI service providers would be required to remind users about the risks of excessive use and step in when signs of dependency or addiction appear. The rules place responsibility on companies to monitor how their products affect users over time, rather than limiting oversight to launch or early deployment stages.

Under the proposal, firms offering these AI services would need to take accountability across the entire product lifecycle. This includes setting up systems for algorithm review, ensuring data security, and strengthening safeguards around personal information. Providers would also be expected to conduct regular assessments to reduce risks linked to misuse or psychological harm.

A significant portion of the draft focuses on emotional and mental well-being. Developers would be required to identify user behavior patterns, analyze emotional states, and evaluate whether users are becoming overly reliant on AI interactions. If extreme emotional responses or addictive behavior is detected, companies would be expected to take appropriate steps to reduce harm, including limiting engagement or adjusting system responses.

The rules also outline clear boundaries around content generation. AI services would be prohibited from producing material that threatens national security, spreads false information, incites violence, or promotes obscene content. These restrictions align with China’s existing online content regulations and broader digital governance framework.

China’s internet watchdog, the Cyberspace Administration of China, has played a central role in shaping AI policy as the country seeks to maintain regulatory control over fast-moving technologies. More details about the regulator’s role and past initiatives can be found on the official website of the Cyberspace Administration of China.

The latest draft builds on earlier AI regulations introduced by Beijing, including rules governing generative AI services and algorithmic recommendations. Analysts say the focus on emotional interaction reflects growing concern over how human-like AI tools may influence behavior, particularly among younger users.

China’s approach contrasts with regulatory discussions in other regions, such as the European Union, where lawmakers are also debating how to manage advanced AI systems through frameworks like the EU Artificial Intelligence Act.
