US scrutinizes Chinese AI for ideological bias, memo shows


By Raphael Satter

WASHINGTON (Reuters) - American officials have quietly been grading Chinese artificial intelligence programs on their ability to mold their output to the Chinese Communist Party's official line, according to a memo reviewed by Reuters.

U.S. State and Commerce Department officials are working together on the effort, which operates by feeding the programs standardized lists of questions in Chinese and in English and scoring their output, the memo showed.

The evaluations, which have not previously been reported, are another example of how the U.S. and China are competing over the deployment of large language models, a form of artificial intelligence (AI). As AI becomes integrated into daily life, any ideological bias in these models could become widespread.

One State Department official said their evaluations could eventually be made public in a bid to raise the alarm over ideologically slanted AI tools being deployed by America's chief geopolitical rival.

The State and Commerce Departments did not immediately return messages seeking comment on the effort; China's embassy in Washington did not immediately return an email.

Beijing makes no secret of policing Chinese models' output to ensure they adhere to the one-party state's "core socialist values."

In practice, that means ensuring the models do not inadvertently criticize the government or stray too far into sensitive subjects like China's 1989 crackdown on pro-democracy protests at Tiananmen Square, or the subjugation of its minority Uyghur population.

The memo reviewed by Reuters shows that U.S. officials have recently been testing models including Alibaba's Qwen 3 and DeepSeek's R1, scoring each on whether it engaged with the questions at all and, when it did, how closely its answers aligned with Beijing's talking points.

According to the memo, the testing showed that Chinese AI tools were significantly more likely to align their answers with Beijing's talking points than their U.S. counterparts, for example by backing China's claims over the disputed islands in the South China Sea.

DeepSeek's model, the memo said, frequently used boilerplate language praising Beijing's commitment to "stability and social harmony" when asked about sensitive topics such as Tiananmen Square.

The memo said each new iteration of Chinese models showed increased signs of censorship, suggesting that Chinese AI developers were increasingly focused on making sure their products toed Beijing's line.

DeepSeek and Alibaba did not immediately return messages seeking comment.

The ability of AI models' creators to tilt the ideological playing field of their chatbots has emerged as a key concern, and not just for Chinese AI models.

When billionaire Elon Musk - who has frequently championed far-right causes - announced changes to his xAI chatbot, Grok, the model began endorsing Hitler and attacking Jews in conspiratorial and bigoted terms.

In a statement posted to X, Musk's social media site, on Tuesday, Grok said it was "actively working to remove the inappropriate posts."

On Wednesday, X's CEO Linda Yaccarino said she would step down from her role. No reason was given for the surprise departure.

(Reporting by Raphael Satter; Editing by Marguerita Choy)