Joel Walmsley writes about the past, present and future of artificial intelligence from a philosophical point of view

The philosopher Fred Dretske once wrote: “If you can’t build one, you don’t know how it works.” For most of its history, artificial intelligence (AI) followed this maxim by applying it to questions about the mind: you can’t truly understand how the mind works, the thought goes, without having some idea of how to construct a machine that actually has one. As a result, AI research functioned mainly as a branch of cognitive science, and its ‘big questions’ were traditionally the philosophical ones that have been around at least since René Descartes and Thomas Hobbes in the 17th century: can a machine think? Are we thinking machines?
However, recent developments in AI – in its application and underlying technology – have led to a pivot away from these somewhat abstract issues and towards a different set of philosophical questions that concern ethics, responsibility and legal regulation.
Understanding AI
AI first got its name in 1956, when computer scientist John McCarthy organised the Dartmouth Summer Research Project on Artificial Intelligence and the last two words stuck. The focus of that conference was “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (quotation taken from the conference’s original funding proposal) and thus placed AI at the heart of cognitive science. Similarly, Alan Turing’s famous 1950 essay ‘Computing Machinery and Intelligence’ (bit.ly/2SNAdAd), in which he proposed the now-eponymous ‘Turing test’, was first published in the philosophy journal Mind.
As a result of these early developments, philosophers have generally distinguished between four ways of understanding what AI is. First there is ‘non-psychological’ AI. Such systems can be understood simply as applied AI technology, providing us with automated tools for accomplishing specific tasks. They do things that would require intelligence if they were done by humans – ‘dirty, dangerous, difficult, or dull’ tasks such as alphabetising lists, air-traffic control, and production-line assembly – but they need not have any broader implications for our understanding of how the mind works.
Second, so-called ‘weak’ AI, by contrast, is a kind of theoretical psychology: we construct theories of human cognition by using concepts from fields such as computer science, and test them by implementing them in non-biological mechanisms. Examples include AI models of learning, perception and language, which have been developed in order to better understand how humans display such abilities and how the biological brain might implement them. The weak AI approach does not make concrete claims about whether AI systems actually have minds; it is best understood as a method for investigating human psychology, which employs broadly mechanical or computational explanations.
Third, ‘strong’ AI can be understood as a specific hypothesis (or even a goal). It is the claim that an appropriately programmed computer (or other machine) really would have mental states and cognitive processes in the same way that humans do. It is comparatively rare to find AI practitioners seriously making such claims about the models that have been built so far, but this conception of AI can nonetheless be found in some of the more sensationalistic popular reporting of its most visible successes, in Hollywood depictions of AI, and in speculation about what the future of AI may hold.
Finally, there is what some philosophers have called ‘supra-psychological AI’. According to advocates of this view, traditional AI has been too anthropocentric in virtue of its focus on the comparison with human intelligence; in principle, there could be other non-biological forms of cognition that go beyond human capabilities. On one hand, this is a natural extension of strong AI: the claim that not only could non-biological machines think in the same way we do or can, but could also think in ways we do not or cannot. On the other hand, this approach also motivates concerns about potential risks of artificial superintelligence (in other words, machines that exceed the capacity of human cognition) that we do not fully understand or cannot fully control.
Until the last decade or so, AI work tended to focus on weak and strong AI. This is to be expected – given Dretske’s maxim – since it’s these two approaches to AI that have the most obvious connections to cognitive science. However, recent developments have led to a significant departure from this historical precedent, both in the approaches to AI that have been adopted, and in the main philosophical questions that follow.
Changing focus
Novel forms of machine learning have employed computational techniques that are substantially faster and more powerful than both the human mind and traditional algorithms. In addition, ‘big data’ technologies now allow for the collection, storage and processing of quantities of information far beyond what the brain could ever manage. As a result, AI’s connection to cognitive science and human psychology has become much less significant; the focus is on non-psychological and supra-psychological AI, and the philosophical questions are ethical ones concerning what we ought to do with these technologies and how we should regulate them.
It’s not too much of a stretch to see the AI involved in self-driving cars, automatic machine translators and ‘recommender systems’ (for example in retail or entertainment) as falling into the ‘non-psychological’ category. We don’t really care whether such AI systems accomplish these tasks using the same kinds of processes that a human would: what really matters is that they do so successfully, so we don’t have to. But as with any other technology, we do care about whether they can do so safely and fairly, and with clear procedures in place to avoid (literally) encoding biases in the datasets.
We also need ways to assign responsibility (both legal and moral) when things go wrong. Philosophers concerned with AI have begun to focus on these ethical questions, too.
By contrast, AI systems for facial recognition, medical diagnosis and risk calculation (for example concerning credit-scoring or criminal recidivism) could be regarded as falling into the ‘supra-psychological’ category, insofar as they often go beyond human capabilities. In these cases, demands for transparency and ‘explainability’ have become significant concerns, as we try to avoid handing over significant decisions to mysterious black boxes whose workings we do not fully understand. (For more on this issue, see my previous piece on AI for The Actuary at bit.ly/2TveRrC).
And ethical questions about the right to challenge the judgments made by AI systems echo the legal right to cross-examine witnesses in a court of law.
The future
One final philosophical debate that’s starting to emerge is the question of how to understand the future relationship between humans and the AI systems we have created. Should we regard AI merely as a set of tools – like screwdrivers and pocket calculators – that we put to use, as necessary, in order to accomplish various tasks more efficiently? Or should we regard AI systems as new-and-improved replacements for the humans who currently do those jobs (with all of the consequent worries about the knock-on effect of automation on employment)? Even this may be something of a false dichotomy: perhaps it would be better to think of human-AI interaction as giving rise to novel forms of collaboration between different kinds of expert.
You may recall the oft-quoted line from Jurassic Park where Dr Ian Malcolm (played by Jeff Goldblum) worries that “scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”. That may have been true when it came to the question of how (or whether) to reanimate dinosaurs, but scientists and philosophers of AI are now very much concerned with the latter. With the recent publication of new EU proposals for the legal regulation of AI technologies (bit.ly/3idWbqF) – especially for systems that manipulate human behaviour or use biometric data (such as facial recognition) for generalised surveillance or social scoring – that ethical concern with what AI should be doing looks likely to continue into the foreseeable future.
Dr Joel Walmsley is a philosopher at University College Cork, Ireland.