China’s interface with artificial intelligence – Academia

With intense competition between superpowers over the development of and interface with artificial intelligence (AI), is China prioritizing engagements that can converge with those of other countries? Common ground is certainly possible, depending on the elements at play, yet troubling chasms can also open up.

Since 2023, China has advocated the Global AI Governance Initiative, which takes a cooperative, consensus-based approach to the development of people-centered AI. It emphasizes respect for national sovereignty and guards against manipulation and disinformation, while promoting mutual respect between nations. It champions the protection of personal data and the assessment and management of associated risks, supported by research aimed at making AI transparent and predictable.

The term “ethics” enters the discourse with the aim of preventing discrimination, supported by ethical review of AI development. The initiative also claims space for the voices of multiple stakeholders and for the interests of developing countries. As a corollary, it accepts a role for the United Nations in establishing an international framework to govern AI, linking development, security and governance.

The initiative was reinforced in September 2024 by the publication of the AI Safety Governance Framework, which defines the challenges and the necessary responses more specifically. The framework is a policy instrument that can be strengthened alongside specific laws or regulations. It categorizes the key risks and highlights actions to address them, while also targeting the different stakeholders along the AI technology flow.

It lists various inherent security risks, such as those arising from models and algorithms, from data and from AI systems. Added to these are risks related to AI applications, in particular cyberspace risks, real-world risks, cognitive risks and ethical risks.

An example of the risks associated with algorithms (essentially computational models or numerical formulas designed to produce particular results) is that they are difficult to understand and need to be made more explainable and transparent to the public. Data risks include illegal data collection and intellectual property (IP) violations. Risks related to AI systems include their exploitation, whether direct or indirect.


Cyber risks include cyberattacks, while real-world risks include criminal activities. Cognitive risks are shaped by monofocal (rather than plural) information that limits a user’s capacity for broad analysis, leading to a “cocoon” effect, while ethical risks include discrimination and a growing divide in information know-how.