Thinking machines

Wednesday 7th July 2021

Joel Walmsley

Joel Walmsley writes about the past, present and future of artificial intelligence from a philosophical point of view


The philosopher Fred Dretske once wrote: “If you can’t build one, you don’t know how it works.” For most of its history, artificial intelligence (AI) followed this maxim by applying it to questions about the mind: you can’t truly understand how the mind works, the thought goes, without having some idea of how to construct a machine that actually has one. As a result, AI research functioned mainly as a branch of cognitive science, and its ‘big questions’ were traditionally the philosophical ones that have been around at least since René Descartes and Thomas Hobbes in the 17th century: can a machine think? Are we thinking machines?

However, recent developments in AI – in its application and underlying technology – have led to a pivot away from these somewhat abstract issues and towards a different set of philosophical questions that concern ethics, responsibility and legal regulation.

Understanding AI

AI first got its name in 1956, when computer scientist John McCarthy organised the Dartmouth Summer Research Project on Artificial Intelligence and the last two words stuck. The focus of that conference was “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (quotation taken from the conference’s original funding proposal) and thus placed AI at the heart of cognitive science. Similarly, Alan Turing’s famous 1950 essay ‘Computing Machinery and Intelligence’ (bit.ly/2SNAdAd), in which he proposed the now-eponymous ‘Turing test’, was first published in the philosophy journal Mind.

As a result of these early developments, philosophers have generally distinguished between four ways of understanding what AI is. First there is ‘non-psychological’ AI. Such systems can be understood simply as applied AI technology, providing us with automated tools for accomplishing specific tasks. They do things that would require intelligence if they were done by humans – ‘dirty, dangerous, difficult, or dull’ tasks such as alphabetising lists, air-traffic control, and production-line assembly – but they need not have any broader implications for our understanding of how the mind works.

Second, so-called ‘weak’ AI, by contrast, is a kind of theoretical psychology: we construct theories of human cognition by using concepts from fields such as computer science, and test them by implementing them in non-biological mechanisms. Examples include AI models of learning, perception and language, which have been developed in order to better understand how humans display such abilities and how the biological brain might implement them. The weak AI approach does not make concrete claims about whether AI systems actually have minds; it is best understood as a method for investigating human psychology, which employs broadly mechanical or computational explanations.

Third, ‘strong’ AI can be understood as a specific hypothesis (or even a goal). It is the claim that an appropriately programmed computer (or other machine) really would have mental states and cognitive processes in the same way that humans do. It is comparatively rare to find AI practitioners seriously making such claims about the models that have been built so far, but this conception of AI can nonetheless be found in some of the more sensationalistic popular reporting of its most visible successes, in Hollywood depictions of AI, and in speculation about what the future of AI may hold.

Finally, there is what some philosophers have called ‘supra-psychological AI’. According to advocates of this view, traditional AI has been too anthropocentric in virtue of its focus on the comparison with human intelligence; in principle, there could be other non-biological forms of cognition that go beyond human capabilities. On one hand, this is a natural extension of strong AI: the claim that not only could non-biological machines think in the same way we do or can, but could also think in ways we do not or cannot. On the other hand, this approach also motivates concerns about potential risks of artificial superintelligence (in other words, machines that exceed the capacity of human cognition) that we do not fully understand or cannot fully control.

Until the last decade or so, AI work tended to focus on weak and strong AI. This is to be expected – given Dretske’s maxim – since it’s these two approaches to AI that have the most obvious connections to cognitive science. However, recent developments have led to a significant departure from this historical precedent, both in the approaches to AI that have been adopted and in the main philosophical questions that follow.

Changing focus

Novel forms of machine learning have employed computational techniques that are substantially faster and more powerful than the human mind and traditional algorithms. In addition, ‘big data’ technologies now allow for the collection, storage and processing of quantities of information far beyond what the brain could ever manage. As a result, AI’s connection to cognitive science and human psychology has become much less significant; the focus is on non-psychological and supra-psychological AI, and the philosophical questions are ethical ones concerning what we ought to do with these technologies and how we should regulate them.

It’s not too much of a stretch to see the AI involved in self-driving cars, automatic machine translators and ‘recommender systems’ (for example in retail or entertainment) as falling into the ‘non-psychological’ category. We don’t really care whether such AI systems accomplish these tasks using the same kinds of processes that a human would: what really matters is that they do so successfully, so we don’t have to. But as with any other technology, we do care about whether they can do so safely and fairly, and with clear procedures in place to avoid (literally) encoding biases in the datasets.

We also need ways to assign responsibility (both legal and moral) when things go wrong. Philosophers concerned with AI have begun to focus on these ethical questions, too.

By contrast, AI systems for facial recognition, medical diagnosis and risk calculation (for example concerning credit scoring or criminal recidivism) could be regarded as falling into the ‘supra-psychological’ category, insofar as they often go beyond human capabilities. In these cases, demands for transparency and ‘explainability’ have become significant concerns, as we try to avoid handing over significant decisions to mysterious black boxes whose workings we do not fully understand. (For more on this issue, see my previous piece on AI for The Actuary at bit.ly/2TveRrC.) Ethical questions about the right to challenge the judgments made by AI systems echo the legal right to cross-examine witnesses in a court of law.
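To make the idea of ‘explainability’ a little more concrete, here is a minimal sketch of one widely used technique, permutation feature importance, applied to an otherwise opaque model. The dataset and model below are invented toy stand-ins (imagine a credit-scoring dataset), not a depiction of any particular system discussed here.

```python
# A minimal, illustrative sketch: permutation feature importance, one simple
# way to peer inside a 'black box' model. The dataset and model are invented
# toy stand-ins, not anything from the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy data: 1,000 'applicants' described by five anonymous features.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model whose internal workings are hard to inspect directly.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and record how much test accuracy falls;
# large drops flag the features the model leans on most heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, mean_drop in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop {mean_drop:.3f}")
```

Techniques like this do not fully open the black box, but they give the people affected by a model’s decisions something concrete to interrogate, and to challenge.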


The future

One final philosophical debate that’s starting to emerge is the question of how to understand the future relationship between humans and the AI systems we have created. Should we regard AI merely as a set of tools – like screwdrivers and pocket calculators – that we put to use, as necessary, in order to accomplish various tasks more efficiently? Or should we regard AI systems as new-and-improved replacements for the humans who currently do those jobs (with all of the consequent worries about the knock-on effect of automation on employment)? Even this may be something of a false dichotomy: perhaps it would be better to think of human-AI interaction as giving rise to novel forms of collaboration between different kinds of expert.

You may recall the oft-quoted line from Jurassic Park where Dr Ian Malcolm (played by Jeff Goldblum) worries that “scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should”. That may have been true when it came to the question of how (or whether) to reanimate dinosaurs, but scientists and philosophers of AI are now very much concerned with the latter. With the recent publication of new EU proposals for the legal regulation of AI technologies (bit.ly/3idWbqF) – especially for systems that manipulate human behaviour or use biometric data (such as facial recognition) for generalised surveillance or social scoring – that ethical concern with what AI should be doing looks likely to continue into the foreseeable future.


Dr Joel Walmsley is a philosopher at University College Cork, Ireland.


This article appeared in our July 2021 issue of The Actuary.
