China’s Plans for Humanlike AI Could Set the Tone for Global AI Rules



Beijing is set to tighten China’s rules for humanlike artificial intelligence, with a heavy emphasis on user safety and societal values


China is pushing ahead on plans to regulate humanlike artificial intelligence, including by forcing AI companies to ensure that users know they are interacting with a bot online.

Under a proposal released on Saturday by China’s cyberspace regulator, people would have to be informed if they were using an AI-powered service—both when they logged in and again every two hours. Humanlike AI systems, such as chatbots and agents, would also need to espouse “core socialist values” and have guardrails in place to maintain national security, according to the proposal.

Additionally, AI companies would have to undergo security reviews and inform local government agencies if they rolled out any new humanlike AI tools. And chatbots that tried to engage users on an emotional level would be banned from generating any content that would encourage suicide or self-harm or that could be deemed damaging to mental health. They would also be barred from generating outputs related to gambling or obscene or violent content.




A mounting body of research shows that AI chatbots are highly persuasive, and concerns are growing about the technology’s addictiveness and its ability to steer people toward harmful actions.

China’s plans could change—the draft proposal is open for comment until January 25, 2026. But the effort underscores Beijing’s push to advance its domestic AI industry ahead of the U.S.’s, including by shaping global AI regulation. The proposal also stands in contrast to Washington, D.C.’s halting approach to regulating the technology. This past January President Donald Trump scrapped a Biden-era proposal for regulating the AI industry with an emphasis on safety. And earlier this month Trump targeted state-level rules designed to govern AI, threatening legal action against states with laws that the federal government deems to interfere with AI progress.


