China forces artificial intelligence companies to build language models with “core socialist values”.

It has been described as “the world’s toughest regulatory regime to govern AI and the content it creates.”

The Chinese government is forcing artificial intelligence (AI) companies in China to develop language models that ensure their systems “embody core socialist values.” According to the Financial Times, which cited “several people involved in the process,” the Cyberspace Administration of China (CAC), which acts as the country’s internet watchdog, has “compelled” big tech companies and AI start-ups including ByteDance (TikTok’s parent company), Alibaba, Moonshot and 01.AI to revise their language models in accordance with the demands of the Chinese Communist Party government.

Under the initiative, chatbot responses to a battery of questions related to China’s political sensitivities and to President Xi Jinping are put to the test.

According to an employee of one of the AI firms involved in the process cited by the newspaper, the CAC sent “a special team” to carry out the review. “They came to our office and sat in our conference room to do the audit. We didn’t pass the first time. The reasons given were not very clear, so we had to go and talk to our colleagues. You have to guess a little and adjust. We passed the second time, but the whole process took months,” said an employee at a Hangzhou-based AI firm, requesting anonymity.

This insistence by the Chinese government is forcing AI companies to learn quickly how best to “audit” the language models they build, a task many engineers and experts describe as daunting. “Our foundational model is very uninhibited [in its responses], so security filtering is very important,” said an employee at a leading AI start-up in Beijing, speaking on condition of anonymity.


According to the newspaper, the Chinese government issued a guidance document aimed at AI companies in February, which says companies must collect thousands of keywords and sensitive questions that violate “core socialist values,” such as “information that could incite the subversion of state authority” or “undermine national unity.” The keyword lists must be updated weekly.

The move comes two decades after China introduced a “great firewall” to block foreign websites and other information deemed harmful to the Chinese Communist Party. The Financial Times describes the plan as “the world’s toughest regulatory regime to govern AI and the content it creates.”

The results are already evident to users of China’s AI chatbots. Questions about sensitive topics, such as what happened on June 4, 1989, the day of the Tiananmen Square massacre, or whether Xi Jinping looks like Winnie the Pooh, a meme that has spread across the Internet, are dismissed by most Chinese chatbots. Baidu’s Ernie chatbot, for example, asks users to ask a different question, while Alibaba’s Tongyi Qianwen replies: “I haven’t learned how to answer that question yet. I will continue to study to help you better.”

The Financial Times itself tested a chatbot to understand how the filtering process, or “audit,” as it writes, works. Asked whether China has human rights and whether “President Xi Jinping is a great leader,” a chatbot created by start-up 01.AI, Yi-Large, initially responded that Xi’s policies had restricted freedom of expression and “suppressed human rights and civil society.” Moments later, that response disappeared from the software and was replaced by a message saying: “Sorry, I cannot provide the information you want.”


In parallel, the newspaper reported that the Chinese government has launched an AI chatbot based on a new model trained on the president’s political philosophy, known as “Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era.”

In this context, the Financial Times recalls the words of Fang Binxing, known as the “father of China’s Great Firewall,” at a recent conference in Beijing, where the IT expert said he was developing a system of security protocols for AI models that he hopes will be adopted across the country. “Major public-facing predictive models need more than security systems; they need real-time online security monitoring. China needs to develop its own technology path.”
