Recent years have seen massive public and private research funding flow into “Ethics and AI” as a rapid response to the first indications that AI can have biases and be discriminatory. The ProPublica article on Machine Bias, for example, became a staple reference on the conference circuit. Another early reference point was the racial bias of facial recognition technologies trained on majority-white data. This article in The Atlantic became influential in spreading that view.
A whole career path for “AI ethicists”, both in academia and in consulting, was founded on this basis. Lately, there has been a critique of all this public and private investment in ethics and AI as a kind of “ethics washing”: a way of avoiding state regulation through industry pushes for self-regulation, pointing to their own supposedly sufficient ethics initiatives. By showing that it can be responsible, the industry keeps the state away. This article on the false promise of ethical AI has a good review of the debate.
The ethical approach to AI has also been promoted heavily by the EU as a solution to its “double pinch”: being concerned about privacy and data protection, inherited from the GDPR debates, while at the same time having the ambition of becoming a world leader in AI, although – with the help of Ethical AI – based on “European Values” in contrast to American capitalism and Chinese authoritarianism. In other words, ethics washing. Thomas Metzinger – who was part of the expert group working on the EU “Ethics Guidelines for Trustworthy AI” – explains this problem in Tagesspiegel.
A group of researchers from the Berkman Klein Center (Fjeld et al. 2020) mapped the principled approach to AI in the first wave of AI ethics and found that they focused on a set of themes such as:
- safety and security
- transparency and explainability
- fairness and non-discrimination
- human control of technology
- professional responsibility
- promotion of human values
All of these tropes are about fine-tuning the already decided introduction of a given system in order to prevent its worst outcomes. They are certainly devoid of political content. The one exception could be “human control of technology”, although I suspect that this refers to the user of the system and not any broader sense of democratic control.
The Second Wave of AI Ethics
Today there is talk of a second wave of AI ethics that learns from the frustrations over how the first wave ended up in so much ethics washing. Moving on from the perhaps necessary but limited scope of improving current AI solutions and preventing disasters, the new wave turns to thinking more broadly about the process and politics of developing more democratic, just, and beneficial technologies. Inspiration comes from the historical work done on responsible research and innovation (RRI), critical technology assessments (CTA), Science and Technology Studies, Civictech, Citizen Science, and other approaches that take a more embedded view of technologies, their social situatedness, and the publics formed around them than the somewhat distant ethical-principles approach taken by the first generation of AI ethicists.
The second wave instead asks questions like:
- Which systems really deserve to be built?
- Which problems most need to be tackled?
- Who is best placed to build them?
- And who decides?
Two examples of the difference between the first and the second wave are taken up in the article on the false promise of ethical AI:
The difference can be seen in the approach to facial recognition. A first-wave approach shows the bias in facial recognition systems and tries to make them inclusive. The second wave asks whether, if ever, facial recognition is a socially productive technology at all. See for example IBM’s decision to no longer offer, develop, or research facial recognition technology.
Another example is apps for mental health. Are the only problems issues of privacy and whether the apps work equally well for everyone, or is the bigger problem the risk that they become cheap replacements for genuine social investments in mental health care?
Perhaps we can talk about a transition between three ways that social theory relates to, or is expected to relate to, technology development, for example in interdisciplinary research projects or in corporate R&D divisions:
- Bring an understanding of human behavior and user studies to technology development
- Make sure technology solutions respect privacy and individual rights
- Challenge the direction and purpose of technology development.
How Fast Can the Ship Turn Around?
It’s great that this new wave of AI ethics asks more politically relevant questions about technology, going beyond both the narrow focus on AI and the limiting framework of ethics principles. Still, a lot of funding for AI ethicists has already been tied up in long-running private and public research investments coming from the first wave. The question now is whether the two waves will complement each other – the first as corrective critique of industry and government AI initiatives, the second as agenda-setting critical reformulation of the whole AI paradigm – or whether we will begin to see clashes between the perspectives and old friends turned enemies.
The Long Waves of Net Politics
Let me put this in the longer perspective of net politics and net critique, starting at the beginning of the millennium, around the time I personally got involved in these questions.
My verdict on the first decade of the new millennium – what I would call “the visionary decade” – lands somewhere between the idea of the internet as a revolutionary medium – let’s call this the naive position – and the internet as a cybernetic control apparatus – let’s call this the cynical position. There was an opening – by historical necessity and the chance of flux – that allowed both a sharp critique and a radical vision, both abstract thought and concrete practice, to emerge. What came out of it can be discussed, but that is in some sense less important than the fact that it happened.
The decade after I would call “the critical decade”. As the internet and the surrounding physical world came to be dominated by an oligarchy of platform capitalists, technology critique grew exponentially along with the growth of the platforms. It was a critique that was professional, sharp, well researched, and well written, but that lacked any visionary political action other than trying to pull various emergency brakes. It was net politics as trolley problem.
Maybe the new decade we have now been forced into can open up space for asking the broader, political, transformative socio-technical questions that the second wave of AI critique points towards. Any ideas of living with and after corona will certainly have to become part of that.
Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar. 2020. “Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI.” SSRN Scholarly Paper ID 3518482. Rochester, NY: Social Science Research Network.