Seemingly neutral, artificial intelligence is increasingly embedded in the fabric of our lives. But its design and deployment are not only profoundly political; they also deepen global power imbalances.
A man orders coffee with the help of an artificial intelligence based system at AI Expo in Cape Town, South Africa. The design and development of AI systems are concentrated in large companies, mostly from the global North, while the technology itself is often deployed in the global South.
Much has been written about the ways in which artificial intelligence (AI) systems have a part to play in our societies, today and in the future. Given access to huge amounts of data, affordable computational power, and investment in the technology, AI systems can produce decisions, predictions and classifications across a range of sectors. This profoundly affects economic development (positively and negatively), social justice and the exercise of human rights.
Contrary to the popular belief that AI is neutral, infallible and efficient, it is a socio-technical system with significant limitations, and it can be flawed. One possible explanation is that the data used to train these systems emerges from a world that is discriminatory and unfair, so what the algorithm learns as ground truth is problematic to begin with. Another is that the humans building these systems carry their own biases and train them in flawed ways. A third is that there is no true understanding of why and how some systems are flawed: some algorithms are inherently inscrutable and opaque, and/or operate on spurious correlations that make no sense to an observer.
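To make the first of these explanations concrete, consider the following minimal sketch (not from the original text). It trains a toy classifier on synthetic "historical" hiring decisions in which a protected attribute correlated with past outcomes; the model learns to reproduce that bias as if it were a genuine signal. All variable names and numbers here are illustrative assumptions.

```python
# Illustrative sketch only: synthetic data and a toy model, not any system
# discussed in this article. It shows how a classifier trained on biased
# "historical" decisions learns to reproduce that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 5_000

# Feature 0: a genuine qualification score.
qualification = rng.normal(loc=0.0, scale=1.0, size=n)
# Feature 1: a protected attribute (0 or 1), irrelevant to actual ability.
group = rng.integers(low=0, high=2, size=n)

# Historical labels: past decision makers favoured group 0 regardless of
# qualification, so the "ground truth" itself encodes discrimination.
hired = (qualification + 1.5 * (group == 0) + rng.normal(0.0, 0.5, n)) > 1.0

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, hired)

# The coefficient on the protected attribute comes out strongly negative:
# the model treats historical prejudice as a real signal, and its predicted
# hire rates differ sharply between the two groups.
print("coefficients (qualification, group):", model.coef_[0])
print("predicted hire rate, group 0:", model.predict(X[group == 0]).mean())
print("predicted hire rate, group 1:", model.predict(X[group == 1]).mean())
```

A simple audit of this kind, inspecting learned coefficients or comparing predicted outcome rates across groups, can surface such bias, though it cannot by itself resolve the deeper political questions raised here.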
But there is a fourth cross-cutting explanation that concerns the global power relations in which these systems are built. AI systems, and the deliberations surrounding AI, are flawed because they amplify some voices at the expense of others and are built by a few people and imposed on others. In other words, the design, development, deployment and deliberation around AI systems are profoundly political.
The need to address the imbalance in the global narrative
Over 60 years after the term was first coined, AI is firmly embedded in the fabric of our public and private lives in a variety of ways: from deciding our creditworthiness to flagging problematic content online, and from diagnosis in health care to assisting law enforcement with the maintenance of law and order.
AI systems today use statistical methods to learn from data, and are used primarily for prediction, classification, and identification of patterns. The speed and scale at which these systems function far exceed human capability, and this has captured the imagination of governments, companies, academia and civil society.
The impact of AI on rights, democracy, development and justice is both significant (widespread and general) and bespoke (affecting individuals in unique ways), depending on the context in which AI systems are deployed and the purposes for which they are built.
Popular narratives around AI systems have been notoriously lacking in nuance, and global deliberations are lacking in genuinely "global" perspectives. Thought leadership, evidence and deliberation are often concentrated in jurisdictions like the United States, the United Kingdom and Europe. The politics of this goes far beyond regulation and policy: it shapes how we understand, critique and build AI systems. The underlying assumptions that guide the design, development and deployment of these systems are context specific, yet globally applied in one direction, from the "global North" towards the "global South".
Complexity of governance frameworks and form
Given the increasingly consequential impact that AI has in societies across the world, there has been a significant push towards articulating the ways in which these systems will be governed, with various frameworks of reference coming to the fore. The extent to which existing regulations in national, regional and international contexts apply to these technologies is unclear, although a closer analysis of data protection regulation, discrimination law and labor law is necessary. There has been a significant push towards critiquing and regulating these systems on the basis of international human rights standards.
Given the impact on privacy, freedom of expression and freedom of assembly, among others, the human rights framework is a minimum requirement to which AI systems must adhere. This can be done by conducting thorough human rights impact assessments of systems prior to deployment, including assessing the legality of these systems against human rights standards, and by industry affirming commitment to the United Nations Guiding Principles on Business and Human Rights.
Social justice is another dominant lens through which AI systems are understood and critiqued. While human rights provide an important minimum requirement for AI systems to adhere to, an ongoing critique is that human rights, "focused on securing enough for everyone, are essential – but they are not enough." Social justice advocates are concerned with ensuring that people are treated in ways consistent with ideals of fairness, accountability, transparency and inclusion, and are free from bias and discrimination.
A third strand emerges from a development perspective: having the United Nations' (UN) Sustainable Development Goals (SDGs) guide responsible AI deployment (and, in turn, using AI to achieve the SDGs), and leveraging AI for economic growth, particularly in countries where technological progress is treated as synonymous with economic progress.
The form these various governance frameworks take also varies. Multiple UN mechanisms are currently studying the implications of AI from a human rights and development perspective, including but not limited to the High-level Panel on Digital Cooperation, the Human Rights Council, UNESCO's World Commission on the Ethics of Scientific Knowledge and Technology, and also the International Telecommunication Union's AI for Good Summit. Regional bodies like the European Union High-Level Expert Group on Artificial Intelligence also focus on questions of human rights and principles of social justice like fairness, accountability, bias and exclusion. International private sector bodies like the Partnership on AI and the Institute of Electrical and Electronics Engineers (IEEE) also invoke principles of human rights, social justice and development. All of these offer frameworks that can guide the design, development and deployment of AI, both for governments and for companies building AI systems.
Complexity of politics: Power and process
Much like the models and frameworks of governance that surround AI systems, the process of building AI systems is inherently political. The problem that an algorithm should solve, the data that an algorithm is exposed to, the training that an algorithm goes through, who gets to design and oversee the algorithm's training, the context within which an algorithmic system is built, the context within which an algorithm is deployed, and the ways in which the algorithmic system's findings are applied in imperfect and unequal societies are all political decisions taken by humans.
It is not an overstatement to say that AI fundamentally reorients the power dynamics between individuals, societies, institutions and governments. It is imperative to be mindful of the inherent limitations of AI systems, and of their imperfect and often harmful interplay with textured and unequal societies and economies. AI systems are primarily developed by private companies, which train and analyze data on the basis of assumptions that are not always legal or ethical, profoundly affecting rights such as privacy and freedom of expression. In the absence of appropriate accountability mechanisms, this essentially makes private entities arbiters of constitutional rights and public functions. The design and development of AI systems are also concentrated in large companies (mostly from the United States and increasingly from China). Yet deployment of the technology is often imposed on jurisdictions in the global South, either under the pretext of pilot projects or in the name of economic development and progress.
Conclusion: Risk and responsibility
Current conversations around AI are overwhelmingly dominated by a multiplicity of efforts and initiatives in developed countries, each arriving with its own set of incentives, assumptions and goals. While governance systems and safeguards are built in these jurisdictions, ubiquitous deployment and experimentation occur in others that are not part of the conversation. Yet the social realities and cultural settings in which systems are designed and developed differ significantly from the societies in which they are deployed. Given the wide disparity in legal protections, societal values, institutional mechanisms and infrastructural access, this is unacceptable at best and dangerous at worst.
It is incumbent on researchers, policy makers, industry and civil society to engage with the complexities of the global South. Failing this, we risk creating a space that looks very much like the opaque, inscrutable, discriminatory and exclusive systems we aim to improve in our daily work.
This article is a shortened version of a text by Vidushi Marda published in "Global Information Society Watch 2019: Artificial intelligence: Human rights, social justice and development" by the Association for Progressive Communications (APC). Marda, a legal researcher interested in the interplay between emerging technologies, policy and society, currently works as Programme Officer with ARTICLE 19's Team Digital, where her primary focus is on the ethical, legal and regulatory issues that arise from algorithmic decision making.