A Penny for your Thoughts: The Crucial Role of Collaboration to Enlighten the Artificial Intelligence Black Box
30 Nov 2020 | Published by Lukas Kögel, Camilla Wanckel & Andree Pruin
Artificial intelligence (AI) holds great promise to relieve the work of public servants and eliminate biases in human decision-making. Governments around the world strive to exploit AI and Big Data in order to design more evidence-based, effective and efficient policies, improve communication with citizens through AI-guided chatbots, or increase the speed and quality of public services. At the same time, however, discriminatory decision-making is likely to persist in the form of discriminatory algorithms, which is why the European Commission has pointed to discriminatory governance structures as a major risk of the use of AI in the public sector. In one prominent case, for example, Amazon’s facial recognition software Rekognition repeatedly failed to identify Black women, classifying them as men or not recognising them at all. Yet the software was purchased by state and law enforcement agencies in the US, putting the equal treatment of citizens by government authorities at stake. Similarly, AI may discriminate with regard to gender, sexual orientation, age, religion or disability.
Discriminatory and incomprehensible algorithms negatively affect citizens: they can lead to arbitrary and unequal treatment or result in policies that do not adequately address citizens’ needs. This can erode public trust, undermine the legitimacy of service delivery processes and provoke the rejection of AI-based decision-making. So how can public sector organisations harness the potential of AI without weakening society’s trust in their policies?
We argue that increased collaboration between governmental and non-governmental actors is a crucial component of establishing equitable, transparent and non-discriminatory AI standards, and of ensuring that ethical, technological, social and political implications are given equal consideration. The following examples show how collaboration between governmental and non-governmental actors helps build networks and strengthen broad participation:
• In civil society, initiatives such as Queer in AI, Black in AI, Latinx in AI and Women in Machine Learning advocate for the representation of their communities in AI. These initiatives function as platforms for collaboration and exchange, sharing ideas at academic AI research conferences such as the International Conference on Machine Learning. Their calls for more diversity and representation in AI are supported by the Toronto Declaration, in which Amnesty International and the digital rights group Access Now urge governments and businesses to protect human rights and the right to equality through oversight mechanisms and accountability.
• The European Commission has set up a High-Level Expert Group on Artificial Intelligence (AI HLEG), consisting of 52 experts from different disciplines, which develops guidelines for a human-centric approach to AI and serves as the steering group for the European AI Alliance, a multi-stakeholder forum for AI matters in the EU. Diversity, non-discrimination and fairness are identified as key requirements for trustworthy AI in the EU’s Ethics Guidelines for Trustworthy AI. To safeguard these values, the AI HLEG suggests flagging mechanisms and broad stakeholder participation. The latter purpose is also served by the OECD AI Policy Observatory, which acts as a platform for governments and other stakeholders on the use of AI in public policy.
• The research project BIAS at Leibniz University Hanover brings together researchers with philosophical, legal and technical backgrounds to develop solutions for non-discriminatory AI-based decision-making. This interdisciplinary collaboration aims to analyse the risks of bias in algorithms and to find approaches for coping with them.
Such collaboration between governmental and non-governmental actors, as well as interdisciplinary collaboration, reflects growing awareness of discrimination in AI, fosters diversity and enables the urgently needed broad social discourse on the issue.
Drawing on these examples, we propose four collaboration components that public sector organisations should strive for:
• First, governments must be aware of the diversity gap, i.e. the lack of diversity within the data science community. In order to reflect citizens’ needs and consider viewpoints across all social groups, public sector organisations seeking to create fair AI solutions need to build engineering teams that are diverse in terms of gender, ethnicity and other characteristics.
• Second, public organisations should establish interdisciplinary project teams. Depending on the policy field and objective of an AI application, computer scientists and mathematicians should be joined by sociologists, philosophers, economists, legal scholars and others to enable a holistic perspective on the use of AI and its impact on society.
• Third, machine learning models and other forms of AI depend on the training data they are fed. These data may reflect structural discrimination against social groups, which must be taken into account when selecting the data on which AI is trained, so that discriminatory biases do not enter algorithms (one simple pre-training check is sketched after this list).
• Fourth, governmental and non-governmental actors must join forces to develop and enforce guidelines that ensure traceability, transparency and the non-discriminatory use of AI.
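To make the third component more tangible, here is a minimal sketch of one pre-training audit: the so-called 80% rule (disparate impact ratio), applied to historical decision records before they are used as training data. The dataset, group labels and threshold below are illustrative assumptions for this sketch, not drawn from any specific government system:

```python
# Minimal sketch: check historical decision records for disparate impact
# before using them as training data. Records and groups are hypothetical.
from collections import defaultdict

# Hypothetical records: (protected_group, favourable_outcome 0/1)
records = [
    ("group_a", 1), ("group_a", 1), ("group_a", 0), ("group_a", 1),
    ("group_b", 1), ("group_b", 0), ("group_b", 0), ("group_b", 0),
]

def disparate_impact(records):
    """Return (min group rate / max group rate, favourable-outcome rates)."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favourable[group] += outcome
    rates = {g: favourable[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

ratio, rates = disparate_impact(records)
print(f"favourable-outcome rates by group: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # rule-of-thumb threshold borrowed from US employment law
    print("warning: training data may encode structural discrimination")
```

A check like this is no substitute for the interdisciplinary scrutiny described above, but it can flag candidate training data that merits closer inspection before any model is trained on it.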
It will be crucial for governments to move from a rather reactive approach to regulating and exploiting AI towards a proactive, leading role. This applies not only to the design of legal frameworks but also to diversity in the composition of the teams that shape government action on AI. Strong and diverse AI expertise within governmental organisations can reduce dependence on sometimes poorly diversified teams in the private sector.
The reliability and unbiasedness of AI in the public sector will be crucial for citizens’ confidence in the technology. Preventing unjustified negative effects on individuals stemming from biased algorithms, such as racial profiling or unwarranted cuts to social transfers, will likely increase public trust and raise the chances of a successful application of AI in the public sector.
As the previous examples show, collaboration gives social groups a seat at the table and builds mutual trust in the use of AI in the public sector. In that sense, the creation of collaborative structures can help governments exploit the potential of AI and develop inclusive solutions that serve all citizens while minimising and preventing unintended and undesired effects.