
Is Xi Jinping an AI doomer?

In July last year, Henry Kissinger visited Beijing for the last time before his death. Among his messages to Chinese leader Xi Jinping was a warning about the catastrophic risks of artificial intelligence (AI). Since then, American tech bosses and former government officials have met quietly with their Chinese counterparts in a series of informal meetings dubbed the Kissinger Dialogues. The conversations focused in part on how to protect the world from the dangers of AI. US and Chinese officials also reportedly discussed this topic (among others) during US National Security Advisor Jake Sullivan’s visit to Beijing from August 27-29.

Many in the tech world believe that AI will match or surpass the cognitive abilities of humans. Some developers predict that artificial general intelligence (AGI) models will one day be able to learn without human help, which could make them uncontrollable. Those who believe that, if nothing is done, AI poses an existential risk to humanity are called “doomers.” They tend to advocate stricter regulation. On the other side are the “accelerationists,” who emphasize AI’s potential to benefit humanity.

Western accelerationists often argue that competition with Chinese developers, who are not inhibited by strong safeguards, is so fierce that the West cannot afford to slow down. The implication is that the debate in China is one-sided, with accelerationists holding the greatest sway over the regulatory environment. In fact, China has its own AI doomers, and they are increasingly influential.

Until recently, Chinese regulators focused on the risk of wayward chatbots saying politically incorrect things about the Communist Party, rather than on cutting-edge models slipping beyond human control. In 2023 the government required developers to register their large language models. Algorithms are routinely rated on their compliance with socialist values and on their risk of “subverting state power.” The rules also aim to prevent discrimination and leaks of customer data. But, in general, AI safety regulations are light. Some of China’s heaviest restrictions were lifted last year.

Chinese accelerationists want to keep it that way. Zhu Songchun, a party adviser and director of a state-backed program to develop AGI, has argued that AI development is as important as the “Two Bombs, One Satellite” project, a Mao-era initiative to produce long-range nuclear weapons. Earlier this year Yin Hejun, the minister of science and technology, used an old party slogan to press for faster progress, writing that development, including in AI, was China’s greatest source of security. Some economic policymakers warn that an overzealous pursuit of safety would harm China’s competitiveness.

But the accelerationists face resistance from a clique of elite scientists to whom the party listens. The most prominent among them is Andrew Chi-Chih Yao, the only Chinese winner of the Turing Award for advances in computing. In July Mr. Yao said AI posed a greater existential risk to humans than nuclear or biological weapons. Zhang Ya-Qin, a former president of Baidu, a Chinese technology giant, and Xue Lan, chairman of the state’s expert committee on AI governance, also believe that AI could threaten the human race. Yi Zeng of the Chinese Academy of Sciences believes that AGI models will eventually see humans the way humans see ants.

The influence of such arguments is increasingly visible. In March, an international group of experts meeting in Beijing called on researchers to kill models that appear to seek power or show signs of self-replication or deception. Soon after, the risks posed by AI and how to control them became a topic of study sessions for party leaders. A public body that funds scientific research has started offering grants to researchers studying how to align AI with human values. State laboratories are carrying out increasingly advanced work in this area. Private companies have been less active, but more of them have started at least paying lip service to the risks of AI.

Speed up or slow down?

The debate over how to approach the technology has led to a turf war among Chinese regulators. The Industry Ministry has drawn attention to safety concerns, asking researchers to test models for threats to humans. But most Chinese securocrats appear to view falling behind America as the greater risk. The Ministry of Science and state economic planners also favor faster development. A national AI law, planned for this year, has slipped off the government’s agenda in recent months because of these disagreements. The impasse was laid bare on July 11, when the official responsible for drafting the AI law warned against prioritizing either safety or expediency.

The decision will ultimately depend on what Mr. Xi thinks. In June he sent a letter to Mr. Yao, praising his work on AI. In July, at a meeting of the party’s Central Committee known as the “Third Plenum,” Mr. Xi sent his clearest signal yet that he takes the doomers’ concerns seriously. The plenum’s official report listed AI risks alongside other major concerns, such as biological risks and natural disasters. For the first time, it called for monitoring the safety of AI, a reference to the technology’s potential to endanger humans. The report could lead to further restrictions on AI research activities.

Further clues to Mr. Xi’s thinking come from the study guide prepared for party cadres, which he is said to have personally edited. China should “abandon uninhibited growth that comes at the cost of sacrificing safety,” the guide says. Since AI will determine “the fate of all mankind,” it must always be controllable, it continues. The document calls for preemptive rather than reactive regulation.

Safety experts say what matters is how these instructions are implemented. China will probably create an AI safety institute to monitor cutting-edge research, as the United States and Britain have done, says Matt Sheehan of the Carnegie Endowment for International Peace, a think tank in Washington. Which department would oversee such an institute remains an open question. For now, Chinese officials emphasize the need to share responsibility for regulating AI and to improve coordination.

If China proceeds to restrict the most advanced AI research and development, it will have gone further than any other major country. Mr. Xi says he wants to “strengthen the governance of artificial-intelligence rules within the framework of the United Nations.” To achieve this, China will need to work more closely with others. But America and its friends are still mulling the matter. The debate between doomers and accelerationists, in China and elsewhere, is far from over.
