The last few years have seen a number of calls for 'transparency' around the workings of the latest artificial intelligence systems.

In 2019, the European Commission's High-Level Expert Group on AI published a report, Guidelines for Trustworthy AI (bit.ly/2O184Am), that proposes transparency as a central ethical and legal requirement. Meanwhile, the EU's General Data Protection Regulation (GDPR) stipulates that when a person is subject to an automated decision, they have the right to an explanation of that judgment, as well as the right to challenge it: no more hiding behind 'computer says no'. Why is transparency suddenly a concern, though? Is it achievable? And if not, what alternatives are available?
Machine learning
It's useful to contrast contemporary machine learning systems with classical programming and what the philosopher John Haugeland called 'good old-fashioned AI'. In both of those approaches, humans write the programs and provide the input data, and the computer applies the programs to the data to come up with the output (whether that's a chess move, the next line of a conversation, or a medical diagnosis). Machine learning turns this on its head: humans still provide the data (which may or may not be labelled or otherwise curated), but the computer itself comes up with a set of rules that describe patterns and correlations within that data set (often with the goal of using those rules to make predictions or recommendations about future data points). The 'learning' takes place because the algorithm can gradually modify its own rules as more data is acquired; the result is that human developers can quickly lose track of how the system actually works.
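To make that contrast concrete, here is a minimal, hypothetical sketch (the loan-approval task, the figures and the four data points are invented purely for illustration): in the classical approach a programmer writes the rule explicitly, while in the machine-learning approach the rule is derived from labelled examples and lives inside the fitted model.

```python
# Illustrative sketch only: the task, data and thresholds are invented.
from sklearn.tree import DecisionTreeClassifier

# 'Good old-fashioned' approach: a human writes the rule explicitly,
# and can point to it later when asked to explain a decision.
def approve_loan_classic(income, existing_debt):
    return income > 30_000 and existing_debt < 10_000

# Machine-learning approach: the human supplies labelled examples,
# and the algorithm derives its own rules from patterns in the data.
past_applicants = [
    [45_000, 5_000],   # [income, existing_debt]
    [22_000, 12_000],
    [60_000, 2_000],
    [28_000, 9_000],
]
past_outcomes = [1, 0, 1, 0]   # 1 = repaid, 0 = defaulted

model = DecisionTreeClassifier().fit(past_applicants, past_outcomes)

# The learned rules live inside the fitted model; nobody wrote them down.
print(model.predict([[35_000, 7_000]]))
```

Even in this toy case, the 'rule' the model has learned is an artefact of four data points; scale that up to millions of examples and thousands of features, and the difficulty of explaining any particular decision becomes clear.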
It just so happens that this revolution in machine learning has coincided with the rise of 'big data'. Not only is hardware now fast and powerful enough to process vast quantities of information, but humans are also increasingly willing to provide it for free. Hundreds of millions of tweets, likes, shares and search queries are generated daily, all containing details about users, their connections and their preferences; these interactions provide a rich training set for machine learning systems. It's no accident that we now have a cottage industry of AI experts and tech insiders giving TED talks and writing op-eds alluding to alchemy, sorcery and other arcane secrets. Arthur C Clarke famously quipped that "any sufficiently advanced technology is indistinguishable from magic", and it certainly looks that way when the speed and power of modern AI systems far outstrips that of human cognition.
Calls for transparency aren't just about understanding, though. They acquire an ethical dimension because AI systems are increasingly used to make decisions and recommendations in socially significant and morally weighty contexts, such as whether you qualify for a loan - or for parole. And yet, over the past few years, we have seen a steady stream of cases in which machine learning systems, trained on data that encodes the biases and prejudices of society at large, have automated and reinforced those very problems in their outputs. These include accusations of sexism and gender bias in Google Translate and Amazon's automated CV-scanning recruitment tool, and the allegation that the COMPAS system - routinely used in several US states and designed to predict the risk of criminal recidivism - is inaccurate in a way that is systematically racist.
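To see how this can happen even when nobody intends it, consider a toy sketch using entirely synthetic, invented data (it does not model any of the real systems mentioned above): a hiring model is trained without access to a sensitive attribute, but a correlated 'proxy' feature lets it reconstruct the bias baked into the historical decisions it learns from.

```python
# Toy illustration with synthetic data: bias in, bias out via a proxy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2_000

group = rng.integers(0, 2, n)              # sensitive attribute (0 or 1)
postcode = group + rng.normal(0, 0.3, n)   # proxy feature correlated with group
skill = rng.normal(0, 1, n)                # genuinely relevant feature

# Historical decisions were biased against group 1, regardless of skill.
historical_hire = ((skill - 1.5 * group + rng.normal(0, 0.5, n)) > 0).astype(int)

# Train on postcode + skill only; the sensitive attribute is excluded...
X = np.column_stack([postcode, skill])
model = LogisticRegression().fit(X, historical_hire)

# ...but predicted hire rates still differ sharply between the groups,
# because the proxy lets the model reconstruct the historical bias.
preds = model.predict(X)
print("hire rate, group 0:", preds[group == 0].mean())
print("hire rate, group 1:", preds[group == 1].mean())
```

Excluding the sensitive attribute from the inputs, in other words, is no guarantee of fairness in the outputs.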
What does transparency look like?
In response to problems such as these, calls for transparency have taken a number of forms. Some concern the relationship between the AI system and its users, developers and the media. For example, European Commission guidelines recommend that:
- We should be more upfront about the motivations for developing AI systems, their intended purposes and the rationale for deploying them in various contexts - asking ourselves 'Why is an AI system being used in this situation?'
- We should be honest about what an AI system can actually do, and avoid the hype and over-zealous marketing that surrounds the announcement of a new breakthrough and its coverage in the media.
- As AI systems get better at mimicking humans, and companies use them to automate frontline user-facing services, users should have the right to be informed that they are interacting with an AI system rather than a human (see Figure 1, depicting a recent interaction with Aer Lingus, in which the conversation eventually does become transparent in this sense).
These kinds of transparency are easy to achieve with good old-fashioned honesty about the development, use and deployment of AI systems. Transparency about the 'inner workings' of an AI system is much more challenging. As mentioned, the GDPR stipulates that people have the right to an explanation of automated judgments made about them, and the EC guidelines recommend that particular AI decisions and the general underlying algorithms should be transparent and traceable for the purposes of audit. However, because of the automatic nature of machine learning systems and their sheer complexity, this is not always possible. What alternatives are there?
One option is 'contestability' - even if we don't know exactly why an AI system made a particular decision or recommendation, we should have the right to challenge it if it is erroneous, unfair or unjust. Machine learning does have the technical capacity to incorporate user feedback to improve performance (you may have seen this with Spotify's 'radio' function, which learns your preferences faster when you tell it what you don't like). So-called 'reinforcement learning' - a bit like training an animal with a carrot and stick - allows the program to use contested decisions, or reported errors, to make minor self-modifications that improve its performance. In order to build AI that's trustworthy even in situations where we can't fully understand exactly how a program works, we will need legal and procedural regulation so that users have something like a right of reply, or the due process of cross-examination for (AI) 'witnesses'.
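As a very rough illustration of that feedback loop, here is a hypothetical sketch (the recommender, the genres and the update rule are all invented, and it is a crude stand-in for reinforcement learning rather than a description of Spotify or any real system): each contested recommendation nudges the system's internal scores, so its behaviour changes even though it never explains itself.

```python
# Hypothetical sketch of learning from contested recommendations.
scores = {"jazz": 0.4, "metal": 0.6, "folk": 0.3}   # initial preference estimates
LEARNING_RATE = 0.1

def recommend():
    # Pick the genre the system currently believes the listener likes most.
    return max(scores, key=scores.get)

def feedback(genre, contested):
    # A contested (disliked) recommendation nudges that genre's score down;
    # an accepted one nudges it up. The system never explains itself,
    # it simply self-modifies in response to the challenge.
    target = 0.0 if contested else 1.0
    scores[genre] += LEARNING_RATE * (target - scores[genre])

# Simulated session: the listener keeps contesting 'metal' recommendations,
# and after a few rounds the system stops making them.
for _ in range(10):
    choice = recommend()
    feedback(choice, contested=(choice == "metal"))

print(scores)
```

Contestability of this sort changes what the system does, but it still leaves the question of why it did it unanswered - which is exactly where the legal and procedural safeguards come in.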
Dr Joel Walmsley is a philosopher at University College Cork, Ireland.