nature machine intelligence, Volume 6 | March 2024 | 246–248
https://doi.org/10.1038/s42256-024-00811-z

Comment

The democratization of global AI governance and the role of tech companies

Eva Erman & Markus Furendal

Can non-state multinational tech companies counteract the potential democratic deficit in the emerging global governance of AI? We argue that although they may strengthen core values of democracy, such as accountability and transparency, they currently lack the right kind of authority to democratize global AI governance.

After a period of intense fascination with artificial intelligence (AI) applications, including large language models (LLMs) such as ChatGPT, the public discussion is quickly turning toward the social, political and ethical effects of these technologies. Several regulation and governance initiatives are under way at national and regional levels. However, because cutting-edge AI development often takes place in multinational companies or international research labs, and AI technology creates cross-border externalities, an additional level of transboundary regulation and cooperation is needed to solve problems or provide goods associated with AI technologies. The 'global governance of AI' can be said to refer to the rules, processes and decision procedures established by governments, international and intergovernmental organizations, and non-state and private actors to regulate the development and deployment of those systems1,2. It includes soft regulations, such as internal ethics guidelines in multinational AI-developing companies like Microsoft, and the 'Bletchley declaration' signed by 28 countries and the European Union (EU) in 2023.
It can also take the form of hard regulations, such as the AI Act currently under negotiation in the EU. Unlike at the national level, regulatory efforts at the global level typically lack a clearly defined central institution or hierarchy. This means that global AI governance initiatives partly overlap and are not always aligned, and are best described by the concept of a 'regime complex' from international relations theory3.

Despite the rapid pace at which this regime complex is developing, little attention has been paid to how democratic the processes by which it takes shape are. It has become a trope for AI-developing companies to speak of a need to 'democratize' AI, but this often simply means that AI technology should be made more accessible4. Moreover, when the democratization of AI governance is discussed, a common approach is to evaluate proposals by whether they can successfully prevent 'bad' outcomes, such as AI bias or existential risks, or make 'good' outcomes, such as increased economic productivity, more likely5. Similarly, the public discussion about AI regulation tends to focus on the pros, cons and viability of concrete proposals, such as whether AI development should be put on hold to enable research into the effects of AI to catch up6, or whether we should create a new global institution akin to the International Atomic Energy Agency7. The problem with such an outcome-focused understanding, however, is that it reduces AI governance to a challenge of executing an agenda that is already set, thereby overlooking who has influence over the agenda in the first place.
