Global AI governance refers to the process in which governments, markets, technology communities and other actors jointly formulate and implement principles, norms and institutions for the safe development and peaceful use of AI worldwide. As a strategic technology, AI has the potential to reshape the global landscape and the course of human development.

The development and application of AI involve a far wider spectrum of actors and industries than conventional technologies, making AI arguably the most complex and influential strategic technology. Because AI touches so many topics and actors, a lack of rules and order would leave the many differences among parties over ethics, norms and security difficult to resolve and would lead to more conflict.

 

First, safe development and peaceful use have become the new order for global AI governance. As far as the technology itself is concerned, the models, algorithms and data that constitute AI carry serious security risks, including the risks of “black box operation” and “loss of control”. On the one hand, the security risks of AI stem from the “black box” nature of the algorithms themselves, whose decision-making mechanisms even developers do not fully understand.

On the other hand, humans have an insufficient understanding of AI’s capabilities, which creates risks of misuse. If not prevented, any error could be catastrophic. Ensuring the safe use of AI is therefore of utmost importance, and security should be the premise of developing any AI technology. The international community should reach a consensus on this and jointly promote the safe development of AI.

From the perspective of technology application, AI has huge development potential in the military and intelligence fields, but it also carries great risks. If AI is abused on the battlefield, it could cause irreparable damage. In addition, AI, as a strategic technology, has become a new strategic commanding height that the world’s major military powers strive for.

Countries have invested heavily in AI armaments for intelligence, surveillance and reconnaissance (ISR), cybersecurity, command and control of semi-autonomous and autonomous vehicles, and improvements in daily work efficiency, including logistics, recruitment, performance and maintenance. Taking the peaceful use of AI as a premise can effectively prevent its excessive militarization and restrain the AI-driven arms race.

Second, the construction of a mechanism complex has become the path to realizing global AI governance. Global AI governance mechanisms are crucial for managing differences, seeking consensus and achieving governance goals. Unlike a principled order, governance mechanisms designed to solve a particular class of problems are more specific and more binding.

As a general-purpose technology, AI involves a wide range of governance fields, which require a series of loosely coupled mechanisms that together form a complex of global governance mechanisms. Each governance mechanism has a clear topic, participating actors and interaction model. There are more than 50 global AI governance initiatives, norms and mechanisms. Their practices span multiple levels, from universal morality and ethics, norms of state behavior and technical standards to industry application practices, and involve international organizations, governments, the private sector, technology communities, citizen groups and other participating entities.

On July 18, 2023, the United Nations Security Council held a high-level public meeting in New York on the theme of “Opportunities and Risks brought by Artificial Intelligence to International Peace and Security”. It was the first time the UNSC had held a meeting on AI.

From the perspective of issues, current AI governance mainly focuses on three areas: ethics, norms and security. AI development and AI governance always go hand in hand: governance is a response to the problems and challenges brought about by technology and its application. With technological progress and application breakthroughs, the issues of AI governance keep expanding and governance capacities must be continuously improved.

Early AI governance focused on ethical issues, mainly the relationship between machines and people, and put forward the concept of maintaining human control over machines. With breakthroughs in military AI applications such as drones, norms became the focus of governance. Responsible use of military AI involves not only implementing existing international norms in the military AI field, but also constructing new norms to further restrain irresponsible uses of military AI. With breakthroughs in Large Language Models (LLMs), security governance of models, algorithms and data has become a new major issue.

Although these governance mechanisms have different goals, given the strategic and cross-cutting nature of AI, a fragmented governance model may generate more conflicts and is not conducive to solving problems. Coordinating the relationships among different mechanisms helps actors make relatively rational and comprehensive decisions when participating in global AI governance.

In addition, AI governance mechanisms are more interconnected than those in other governance areas. Because AI technology plays an important role across different issues, it runs through the global AI governance mechanism complex as a main thread, embodied in the correlations between issues and the interaction models among actors. This makes it necessary to adjust AI governance concepts and models accordingly.

Challenges for Global AI Governance

As an emerging issue, global AI governance faces dual challenges: the order of safe development and peaceful use of AI is disrupted by geopolitics, while the governance path of the mechanism complex faces conflicts among the three governance logics of politics, market and technology. These challenges reflect the participating entities’ different value judgments, path preferences and interest calculations regarding security, development and application in the process of AI governance.

On the one hand, geopolitical factors constrain the construction of a global AI order. Global governance and geopolitics pull in opposite directions. The former seeks a common understanding of issues and joint participation in the governance process, whereas the latter aims to expand the power of nation-states, shaped by factors such as geographical environment, ideology, and national military, economic and sci-tech strength. The former aims to establish an order of common compliance, whereas the latter seeks to advance national strategic interests. As such, the success of global AI governance depends to some extent on overcoming the challenges of geopolitical contest.

Geopolitical contest primarily centers on who dominates rule-making platforms. As the main channel of global AI governance, the UN has been challenged by the so-called “alliance of like-minded nations” led by the United States (US). From a geopolitical perspective, the US and some other Western countries believe that as the influence of emerging market countries represented by China grows within the UN, the UN can no longer reflect the wishes and concerns of the West. They are therefore more inclined to apply the “alliance of like-minded nations” model, vigorously promoted in their traditional cybersecurity governance, to AI governance: drawing ideological lines, designating strategic competitors, creating “small circles” of AI governance, and strengthening interoperability in the field of military AI.

Emerging markets and developing countries, represented by China, believe that maintaining the legitimacy and authority of the UN is a prerequisite for leading global AI governance. From the UN’s perspective, if it does not play an important role in key areas like AI, its influence and authority will falter further. Since 2018, the UN has set up a high-level group on digital cooperation, released a Roadmap for Digital Cooperation, established a high-level AI advisory body, and advocated the development of AI “that is trustworthy, human rights-based, safe and sustainable, and promotes peace”.

In addition, a country’s position of strength in the AI field shapes its attitude towards AI governance. In terms of the bindingness of rules, some of the US-led Western countries favor non-binding normative governance, whereas weaker countries stand for legally binding rules.

The difference lies in the fact that the former want to use non-binding normative space to pursue plural strategic goals, whereas the latter hope that binding rules will restrain the behavior of all parties and ensure international security. On key issues such as military AI, the demands of strong and weak countries diverge even more clearly: the former wish to gain a strategic security advantage while advancing AI development, whereas the latter, lacking military strength, prefer to ban AI’s military application.

On the other hand, the realization of a global AI governance mechanism complex has always faced differences among the complicated logics of politics, market and technology, which run through every level of ethical, security and technological governance. The legitimacy of political intervention lies in the potential risks of AI security failures, which make it necessary for governments to strengthen supervision. Since AI is a capital-intensive, high-end technology, the market and the technology community play a crucial role in developing the AI industry.

An ideal type of global AI governance mechanism complex would require states, the market, the technology community and other participating entities to maintain clear boundaries between one another and to provide governance solutions from their own perspectives. In reality, however, the boundaries between states, market and technology are blurred and entangled with complex interests, which makes it more difficult for the global AI governance system to form a unified mechanism complex.

Between state and non-state actors in AI governance there are not only blurred boundaries but also differences in governance focus. Government departments generally pay more attention to high-politics fields related to AI, whereas market and technology actors pay more attention to governing the technology development process and standards. The political, market and technology perspectives differ not only in their topical focus but also harbor a latent and complicated contest. Governments prioritize military and security risks and tend to assert authority over market and technology actors.

From the perspective of market and technology actors, AI technology is highly complex and iterates rapidly; since governments are more or less detached from the technology, their interventions do not conform to the logic of technological development and largely hinder it. On this view, companies and the market should play a more important role in global AI governance.

At present, leading AI companies such as Microsoft, OpenAI, Anthropic, Alibaba, Tencent and Baidu have each put forward initiatives on global AI governance. They hold that governance embedded in technological development is the key to ensuring safe AI, emphasizing transparency, responsible R&D of algorithms and models, and the alignment of AI with human values. Moreover, representatives of market and technology have formed an alliance to participate in the global governance process.

The proactive engagement of market and technology actors in global governance aims to secure a more relaxed regulatory environment for technological development and to conduct self-supervision through voluntary commitments or standard setting. This initiative is part of corporate responsibility, but it also objectively dilutes government power. In general, as the role and influence of technology and the market in global AI governance grow, the contest among politics, market and technology will only become fiercer.

Chinese Global AI Governance Solutions

By issuing the Global AI Governance Initiative in October 2023, China systematically elaborated to the world its positions, propositions and proposals on global AI governance. The Chinese solutions are China’s systematic response to the current problems and thorny issues of global AI governance, and demonstrate China’s positive attitude and practical actions in promoting global AI development and governance cooperation.

First of all, the Chinese governance solutions conform to the current trend of AI technological development and respond to governance needs in a timely manner. In terms of agenda content, the mainstream AI governance processes of the day mainly involve the two themes of safe development and peaceful use.

Accordingly, China’s Global AI Governance Initiative grasps both themes. It puts forward the principle of equal emphasis on development and security in a bid to ensure that AI technologies benefit humanity, elaborates on the two themes from the perspectives of technology ethics, technology for good, risk assessment, privacy, data security, algorithm management and enabling development, and reiterates the key issues of global AI governance. China’s positions also respond to the military AI governance process on Lethal Autonomous Weapon Systems (LAWS) under the framework of the UN Convention on Certain Conventional Weapons.

Drawing fully on various global initiatives and governance processes, the Chinese solutions call for enhanced governance cooperation in the AI field. Concretely, viewed from their core structure, they call on all countries to commit to a vision of common, comprehensive, cooperative and sustainable security, to place equal emphasis on development and security, to build consensus through dialogue and cooperation, and to develop open, fair and efficient governance mechanisms.

From the perspective of governance entities, it is necessary for governments, international organizations, companies, research institutes, civil organizations, and individual citizens to uphold the principles of extensive consultation, joint contribution and shared benefits and work for coordinated progress.

In regard to governance means, it is necessary to establish AI ethical standards, build a testing and assessment system based on AI risk levels, establish and improve laws and regulations, develop risk-prevention technologies, and put in place assistance and cooperation mechanisms for developing countries, so as to promote the implementation of the AI Governance Initiative.

In regard to governance goals, the hope is that AI technology will further increase the well-being of humanity and contribute to building a community with a shared future for mankind.

From the perspective of global trends, a review of the Chinese solutions and other bilateral and multilateral AI governance processes reveals “mixed blessings” in current AI governance. On the one hand, the various processes have reached a degree of consensus on general principles and plans. Basically, they all take the general principles of “technology for good”, “risk control” and “peaceful use” as the core of their agenda design, and in implementing concrete plans based on those principles, they recognize the need to enhance cooperation, build consensus and give full play to the role of multiple stakeholders.

These macro principles and plans have established the criteria and direction of global AI governance. On the other hand, the various mechanisms have found it difficult to get into the details of refining principles and implementing plans. Most processes are still struggling to make breakthroughs in concrete agenda areas, let alone produce a clearly defined and binding standards framework or guidelines for action.

Secondly, the Chinese solutions further refine the principles and concepts of global AI governance. They not only follow the common principles and practical strategies of global AI governance, but also revise and improve some lopsided and narrow notions in current global AI governance, pushing the process in a safer and fairer direction.

First, improving the human rights concept in AI ethics. In its Global AI Governance Initiative, China stresses that a people-centered approach should be adopted in developing AI. Though this principle is commonplace across governance processes, China has enriched its interpretation, advocating that AI development should not only respect and safeguard individual rights but, even more, increase the well-being of humanity, on the premise of ensuring social security and respecting human rights and interests, so that AI always develops in a way that benefits human civilization.

In this sense, the Chinese solutions not only differ from the narrow Western conception of human rights by providing a framework of moral norms and a direction for developing AI technology, but also put forward a deeper ethical standard by advocating a new global AI governance model that balances the interests of all parties to jointly meet the challenges facing mankind.

The 2023 World AI Conference was held on July 8, 2023, in Shanghai, China. The picture shows a large AI model on display at Huawei’s booth.

Second, the concept of equality in AI development is emphasized. Facing the general voicelessness of developing countries in global AI governance, China, as a major developing country, has given full play to its influence to bring the common concerns of developing countries into wider international discussion, put forward the principle of ensuring equal rights, equal opportunities and equal rules for all countries in AI development and governance, and advocated international cooperation and assistance for developing countries to continuously bridge the gap in AI and its governance capacity.

This will help address the marginalization of developing countries in global AI governance in terms of capability, willingness and platform openness, and help build fair and reasonable international rules by giving equal weight to the positions and voices of countries of the Global North and South.

Third, the vision of cooperation on AI governance is renewed. The Chinese solutions make it clear that AI development should transcend “small circles” and that the instrumentalization and privatization of global governance mechanisms by certain countries should be opposed. China proposes that all countries, big or small, strong or weak, and regardless of social system, have equal rights to develop and utilize AI, and stresses the need to establish a broader AI cooperation mechanism and jointly advance the AI governance process.

Finally, the Chinese solutions provide a comprehensive framework for coordinating the currently loose global governance mechanisms. Although the process of global AI governance is accelerating, the coupling of governance mechanisms across functional fields remains insufficient, leading to obvious deficiencies in execution and coordination.

On the one hand, this is because the various mechanisms are insufficiently representative, so their governance standards are not accepted by a wider range of countries. On the other hand, owing to limited agenda coverage, they respond weakly to cross-cutting issues in the AI field such as militarization, system security and development cooperation.

China’s Global AI Governance Initiative targets precisely these issues. The Chinese solutions bring countries of different ideologies and at different stages of development into the same governance framework, with equal opportunities for all countries to participate and deliberate, emphasizing multilateralism and common interests.

In this way, the Chinese solutions seek to address the problem of under-representation of existing governance mechanisms and promote the widespread acceptance and application of global AI governance standards.

At the same time, the Chinese solutions have expanded the scope of governance issues, placing AI security alongside development, peace and cooperation as themes for governance discussion, aiming to build an all-round and more comprehensive global AI governance system and laying a solid foundation for the effective implementation and future development of global AI governance.

As such, China’s AI governance solutions have broad practical prospects. China can work with the international community to seize opportunities such as the UN Summit of the Future to promote recognition of the key propositions of the Global AI Governance Initiative in the UN arena.

China can also actively strengthen multi-level cooperation within other governance mechanisms, and further strengthen synergy in international technical exchanges and policy coordination through consultation on principles and the pooling of resources.

In addition, China can build more concrete and practical governance mechanisms centered on the Global AI Governance Initiative, and seek a reasonable balance along multiple dimensions such as development and security, reform and stability, so as to better address the issues of representation and breadth in current global AI governance.

 

 

