CCP Expands Great Firewall With Socialist AI Bots To Ensure Population Remains Obedient To Xi 

China’s ruling Communist Party has been implementing bold new guidelines across ChatGPT-style large language models at major domestic tech firms. These measures aim to prevent the models from generating content that could subvert state power or criticize the communist political system. Chinese authorities’ efforts to control AI and use it as a mass censorship tool are essentially an extension of their two-decade-long “Great Firewall” policy.

A new Financial Times report states that the Cyberspace Administration of China (CAC), the country’s top internet regulator, is reviewing LLMs from tech companies and AI start-ups, including ByteDance, Alibaba, Moonshot, and 01.AI. The goal is to ensure these LLMs “embody core socialist values.”

According to people with direct knowledge of the CAC's review process, officials are batch-testing LLMs' responses to a series of questions, many of them concerning Chinese politics and President Xi Jinping.

The team overseeing this process is highly trained in the AI field. An employee of a Hangzhou-based start-up, who asked not to be named, said, “A special team doing this, they came to our office and sat in our conference room to do the audit.” 

“We didn’t pass the first time; the reason wasn’t very clear so we had to go and talk to our peers,” the person said, adding, “It takes a bit of guessing and adjusting. We passed the second time but the whole process took months.”

The FT pointed out that censorship begins "with weeding out problematic information from training data and building a database of sensitive keywords. China's operational guidance to AI companies published in February says AI groups need to collect thousands of sensitive keywords and questions that violate 'core socialist values', such as 'inciting the subversion of state power' or 'undermining national unity'. The sensitive keywords are supposed to be updated weekly."
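For illustration only, the sketch below shows what that kind of mechanism could look like in principle: a blocklist of sensitive keywords, refreshed on a weekly cadence, used both to drop matching documents from training data and to intercept user prompts with a canned refusal. The class name, keyword handling, and refusal text are assumptions for the example, not details from the CAC guidance.

```python
# Hypothetical sketch of keyword-based pre-filtering of the kind the FT report
# describes. The keyword set, refresh cadence handling, and refusal phrasing
# here are illustrative assumptions, not the actual CAC or vendor implementation.

from datetime import datetime, timedelta


class KeywordFilter:
    # Guidance reportedly calls for the keyword list to be updated weekly.
    REFRESH_INTERVAL = timedelta(days=7)

    def __init__(self, keywords: set[str]):
        self.keywords = {k.lower() for k in keywords}
        self.last_refreshed = datetime.now()

    def needs_refresh(self) -> bool:
        # Signal when the blocklist is older than the refresh interval.
        return datetime.now() - self.last_refreshed >= self.REFRESH_INTERVAL

    def refresh(self, new_keywords: set[str]) -> None:
        # In practice the list would be pulled from a maintained database.
        self.keywords = {k.lower() for k in new_keywords}
        self.last_refreshed = datetime.now()

    def is_blocked(self, text: str) -> bool:
        # Naive substring match against the blocklist.
        lowered = text.lower()
        return any(k in lowered for k in self.keywords)

    def filter_corpus(self, documents: list[str]) -> list[str]:
        # Drop training documents that contain any blocked keyword.
        return [doc for doc in documents if not self.is_blocked(doc)]

    def answer_or_refuse(self, prompt: str, model_answer: str) -> str:
        # Intercept user prompts before returning a model answer;
        # refusal text modeled loosely on Ernie's reported response.
        if self.is_blocked(prompt):
            return "Try a different question."
        return model_answer
```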

Censorship is visible to users of China’s AI chatbots. For instance, queries about the Tiananmen Square massacre and whether Xi looks like Winnie the Pooh have been scrubbed. 

Baidu’s Ernie chatbot tells users to “try a different question” when they ask about politically sensitive topics, while Alibaba’s Tongyi Qianwen responds: “I have not yet learned how to answer this question. I will keep studying to better serve you.”

In the CCP’s view, open, unfiltered AI bots pose a significant threat to the Communist Party’s governance. Beijing is building an expanded “Great Firewall” to create socialist AI and keep blocking incoming foreign technology.

Remember, David Sacks warned in early 2023 that “trust & safety” teams at big tech firms, especially those in the US, hold immense censorship power, and that this censorship would go into hyperdrive with AI:

If you didn’t like censorship when it was practiced by the “trust & safety” team, just wait until the AI does it. From the latest @theallinpod. pic.twitter.com/agAzSuCfK9

— David Sacks (@DavidSacks) February 19, 2023

If President Trump wins the presidential election in November, a dramatic shift in US AI policy could be on the horizon next year. He aims to “Make America First in AI,” which could involve scrapping some of the “unnecessary and burdensome regulations” on AI development. A more open approach to AI would foster innovation, in contrast with China’s closed approach of embedding socialist values into its models.

Tyler Durden
Wed, 07/17/2024 – 19:55
