nature machine intelligence Volume 6 | March 2024 | 246–248 | 246
https://doi.org/10.1038/s42256-024-00811-z
Comment
The democratization of global AI governance and the role of tech companies
Eva Erman & Markus Furendal
Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.
After a period of intense fascination with artificial intelligence (AI) applications, including large language models (LLMs) such as ChatGPT, the public discussion is quickly turning toward the issue of the social, political and ethical effects of these technologies. Several regulation and governance initiatives are under way at national and regional levels. However, because cutting-edge AI development often takes place in multinational companies or international research labs, and AI technology creates cross-border externalities, an additional level of transboundary regulation and cooperation is needed to solve problems or provide goods associated with AI technologies. The ‘global governance of AI’ can be said to refer to the rules, processes and decision procedures established by governments, international and intergovernmental organizations, non-state and private actors to regulate the development and deployment of those systems1,2. It includes soft regulations such as internal ethics guidelines in multinational AI-developing companies like Microsoft, and the ‘Bletchley declaration’ signed by 28 countries and the European Union (EU) in 2023. It can also take the form of hard regulations, such as the AI Act currently under negotiation in the EU. Unlike the national level, regulatory efforts at the global level typically lack a clearly defined central institution or hierarchy. This means that global AI governance initiatives are partly overlapping and not always aligned, and are best described by the concept of ‘regime complex’ from international relations theory3.
Despite the rapid pace at which this regime complex is developing, little attention has been paid to how democratic the processes by which it takes shape are. It has become a trope for AI-developing companies to speak of a need to ‘democratize’ AI, but this often means simply that AI technology should be made more accessible4. Moreover, when the democratization of AI governance is discussed, a common approach is to evaluate proposals by whether they can successfully prevent ‘bad’ outcomes such as AI bias or existential risks, or make ‘good’ outcomes such as increased economic productivity more likely5. Similarly, the public discussion about AI regulation tends to focus on the pros, cons and viability of concrete proposals, such as whether AI development should be put on hold to enable research into the effects of AI to catch up6, or whether we should create a new global institution akin to the International Atomic Energy Agency7. The problem with
such an outcome-focused understanding, however, is that it reduces AI governance to a challenge of executing an agenda that is already set, thereby overlooking who has influence over the agenda and in what ways. It thus directs our attention towards the effects of governance mechanisms rather than the societal goals pursued and the means by which to achieve them. Yet this fails to take the normative ideal of democracy seriously. In broad strokes, political theorists have suggested that governance arrangements are democratic to the extent that those affected by the decisions have a direct or indirect say in the decision-making on equal terms8. For AI governance to be democratic, this entails that those who are affected by AI technology should identify and decide upon the goals of AI governance collectively.
This ideal seems unattainable at the global level. Global institutions are typically said to suffer from a democratic deficit, as it is difficult for citizens to stay informed and exercise influence over political processes far removed from their daily lives. In response to this challenge, earlier debates have often gravitated toward the democratic potential of non-state actors such as non-governmental organizations (NGOs), advocacy groups and social movements. The hope is that such civil society actors could represent the interests of citizens and make sure they appear in the decision processes of international organizations and institutions, and that they could function as watchdogs, holding those who wield power accountable. It is suggested that, ideally, civil society actors can thereby help democratize global governance, by promoting more inclusion, representation and transparency9,10. This makes it appropriate to ask whether non-state actors may help to democratize the global governance of AI.
We suggest that one should not be too optimistic. This is primarily because in global AI governance, one kind of non-state actor is far more prominent than the civil society actors usually entrusted to help offset democratic deficits: AI-developing tech companies. The machine learning approach to AI is, by now, so resource-intensive that breakthroughs are restricted to multinational corporations, such as Microsoft and its affiliate OpenAI, or start-ups funded by wealthy individuals, such as Anthropic. At the same time, a narrative is being cultivated about AI technology as inherently complex and difficult to understand, and of politics as so cumbersome that ill-informed elected officials are likely to cause more harm than good if they try to regulate AI on their own. The CEO of OpenAI, for instance, has not only provided policy advice at a US Senate subcommittee hearing, but also privately met with US lawmakers as well as the leaders of several European countries11.
Listening to technological experts might be necessary in the process of finding suitable regulation for a rapidly moving policy area. At best, involving non-state actors could help to produce outcomes that are preferable, albeit not more democratic, than the alternative. At worst, it enables regulatory capture by companies that can hardly be said to represent the voices of ordinary people, let alone of marginalized groups12. It is thus crucial to ask what role tech companies may have in the democratization of global AI governance, one of our age’s most important policy areas. In this Comment, we make two claims. First, the democratic potential of non-state actors depends on whether they wield epistemic, market or moral authority in global AI politics. Second, although including non-state actors could improve the prospects for future democratization of AI, the prospects for them becoming democratic agents of the kind that contribute to the democratization of global AI governance appear bleak. These insights should inform the discussion around the democratic challenges facing the global governance of AI.

Forms of authority and the democratic role of tech companies
AI-developing companies have come to exercise considerable influence over the emerging global governance of AI in two overlapping ways13. First, they possess epistemic authority, rooted in their position as trustworthy judges of what constitutes knowledge and acceptable evidence in the AI domain. In this role, they not only develop AI but also inform the production and dissemination of knowledge, which in turn could shape public opinion and policy decisions. OpenAI and DeepMind, for example, partly function as research institutes, probing the limits of the capabilities of AI technologies in research papers that inform regulatory efforts. Second, they wield market authority and exert influence over economic and political decisions. For example, Microsoft arguably has notable power over AI development and regulation since its recent acquisition of the coding platform GitHub and investments in the AI-developing company OpenAI, which in turn has lobbied lawmakers in the negotiations on the EU AI Act14. Given that the category of non-state actors includes companies such as these, simply granting access to non-state actors is not guaranteed to democratize the global governance of AI.

It is clear that tech companies cannot be considered to be ‘democratic agents’, by which we mean agents that decide on policies and laws ‘on behalf’ of others, as no one has in fact authorised them to do so. In global governance, such authorization occurs either directly through a democratic procedure (such as when EU citizens elect the European Parliament) or indirectly through delegation by a directly authorised body (such as when member state governments appoint members of the European Commission). When tech companies are invited into regulatory processes, however, they do not represent anyone but themselves. And even if they attempted to transmit citizens’ concerns into the decision processes of international organizations, that would not be enough to make them democratic agents, as their authority has not arisen from the rightful source. This political–theoretical analysis suggests that, although tech companies can be said to contribute to improved global governance in many ways, by fulfilling different kinds of rightful ends, they cannot reduce the democratic deficit in global AI governance.

That said, indirect authorization has taken place in more established policy areas in global governance, in which non-state actors have become democratic agents through delegation of authority, taking on crucial roles in several phases of the policy cycle of international organizations. The International Committee of the Red Cross, for example, has a mandate under the Geneva Conventions to monitor the implementation of international humanitarian law. What is important to notice in these cases, however, is that authority is typically delegated not to companies but to NGOs and other civil society actors, who exert a kind of moral authority derived from the fact that their mission is to promote what are generally seen as morally desirable goals. Arguably, with more international legislation in the AI domain, and deepened collaboration between (democratic) states in international organizations, non-state actors with moral (and possibly those with epistemic) authority may become democratic agents in a similar way in the future.

Although the prospects for tech companies with market authority becoming democratic agents remain bleak, we argue that they may still become ‘agents of democracy’ — that is, agents that strengthen the core values of democracy in their decision-making, such as the values of accountability, transparency, inclusion and deliberation. Non-state actors with moral authority, such as AlgorithmWatch, may be said to already be agents of democracy, by providing input on ethical and human rights-related considerations and by working to ensure that algorithmic decision-making is used in ways that are consistent with democratic values and principles. By contrast, market authorities operate within the structure and logic of the market, and may thus appear less likely to improve their democratic credentials in the ways that scholars have hoped for. However, one should not disregard the fact that multinational companies often publicly support democratic values such as transparency, accountability and respect for human rights, both in their own AI development and in their interactions with lawmakers. The most charitable interpretation of the role of tech companies in the global governance of AI hence says that they could, in principle, act as agents of democracy1,5, as long as their commitments to democratic values are not simply an attempt to gain support and trust from consumers through ‘ethics washing’15. Importantly, however, although they may strengthen the empirical prerequisites for future democratization of global AI governance by acting as agents of democracy, this does not take them closer to becoming democratic agents themselves, and as such contributing to democratization.

Conclusion
AI is often perceived to be both a great threat and a great promise to democracy. Pessimists worry about the ways in which LLMs and other forms of AI can undermine communication and trust. Optimists point to how AI enables technological innovations in voting procedures, or enables more voices to be heard in the democratic process, although they rarely explore this in light of existing democratic theories. In this Comment, we have suggested that the public debate should also recognise the distinct and additional point that the governance of AI should be as democratic as possible. Given the effect that AI technology already has on societies, it is crucial that there are democratically legitimate channels for the people affected by AI to have a say about how it is being developed and deployed. Specifically, we have argued that, although some non-state actors — most probably those with moral or epistemic authority — may become democratic agents in the future, and as such contribute to the democratization of global AI governance, most of them — in particular those with market authority — are more likely to increase their democratic credentials as agents of democracy, improving the empirical prerequisites for future democratization.

Eva Erman1,2 & Markus Furendal1,2
1Department of Political Science, Stockholm University, Stockholm, Sweden. 2These authors contributed equally: Eva Erman, Markus Furendal.
e-mail: eva.erman@statsvet.su.se

Published online: 8 March 2024

References
1. Erman, E. & Furendal, M. Moral Philos. Politics 9, 2 (2022).
2. Zürn, M. In The Oxford Handbook of Governance (ed. Levi-Faur, D.) (Oxford Univ. Press, 2012).
3. Tallberg, J., Erman, E., Furendal, M., Geith, J., Klamberg, M. & Lundgren, M. Int. Stud. Rev. 25, 3 (2023).
4. Seger, E., Ovadya, A., Siddarth, D., Garfinkel, B. & Dafoe, A. In Proc. 2023 AAAI/ACM Conference on AI, Ethics, and Society https://doi.org/10.1145/3600211.3604693 (2023).
5. Erman, E. & Furendal, M. Polit. Stud. https://doi.org/10.1177/00323217221126665 (2022).
6. Future of Life Institute. https://go.nature.com/44Apd9V (accessed 13 June 2023).
7. Altman, S., Brockman, G. & Sutskever, I. https://go.nature.com/3I867xV (accessed 22 May 2023).
8. Valentini, L. Perspect. Politics 12, 4 (2014).
9. Nanz, P., Kissling, C. & Steffek, J. (eds.) Civil Society Participation in European and Global Governance (Palgrave Macmillan, 2008).
10. Dryzek, J. & Tanasoca, A. Democratizing Global Justice: Deliberating Global Goals (Cambridge Univ. Press, 2021).
11. Kang, C. The New York Times https://go.nature.com/3uLG3W5 (7 June 2023).
12. Free Press. https://go.nature.com/42XjCtJ (8 May 2023).
13. Hall, R. B. Harv. Int. Rev. 27, 2 (2005).
14. Perrigo, B. Time https://go.nature.com/3Tax0qY (20 June 2023).
15. Bietti, E. In FAT* ’20: Proc. 2020 Conference on Fairness, Accountability, and Transparency 210–219 https://doi.org/10.1145/3351095.3372860 (2020).
Competing interests The authors declare no competing interests.
Additional information Peer review information: Nature Machine Intelligence thanks the anonymous reviewer(s) for their contribution to the peer review of this work.
