General Features

Man vs machine

Karen Usher — Thursday 4th May 2023

ChatGPT was asked to come up with a taxonomy for AI ethics risk management. Karen Usher explains the results, which you may find surprising and reassuring. For now...

Actuaries are increasingly exposed to the use of artificial intelligence (AI) and its associated risks, particularly those relating to ethics. Many of these risks are on the up, as AI use becomes more widespread. The IFoA AI Ethics Risk Working Party has produced a risk taxonomy to help identify and triage these risks. As an interesting exercise, we also asked the chatbot ChatGPT to come up with a taxonomy, so we could compare the two – good fodder for debate!

The starting point

Given the lack of an agreed definition for AI, the working party set its scope to consider actuarial models with any of the following typical characteristics:

  • Dynamic AI models that cannot be assumed to keep functioning as originally intended over time

  • AI models that may leverage large volumes of data and look for hidden relationships, some of which may be illegal or unethical – for example if they introduce bias

  • AI models that develop relationships that may not be easily understood or described. Legality and ethical operation must be provable.

The taxonomy was designed by reviewing a selection of documents on AI, along with information on events where risks had materialised; the types of failure were then analysed, grouped and organised into a hierarchy. The taxonomy only considers the use of AI for specific tasks; risks associated with general or strong AI are currently excluded.

There are three levels, with risks being grouped where impacts can overlap categories – for example discrimination could be caused by bias in the training data and/or error in the model itself. The taxonomy is designed to be as generic as possible but is not exhaustive; risks may need to be removed or added. The categories are not exclusive so a risk may fit into more than one category.

Some categories are not specific to AI, for example security and privacy. These risks have been included if AI use brings a new dimension to the risk or could change the materiality.
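The overlapping, non-exclusive structure described above can be sketched as a simple data model. The category and risk names below are illustrative placeholders only, not the working party's actual taxonomy; the point is that each risk may sit under more than one category, and categories form a shallow hierarchy.

```python
# A minimal sketch (assumed names, not the working party's taxonomy) of a
# non-exclusive risk taxonomy: categories form a shallow hierarchy, and a
# risk may be filed under more than one category.

# Hypothetical top-level categories and their sub-categories.
TAXONOMY = {
    "Data": ["Bias in training data", "Privacy", "Data quality"],
    "Model": ["Model error", "Lack of transparency"],
    "Use": ["Discrimination", "Security"],
}

# Hypothetical mapping of risks to one or more sub-categories
# (non-exclusive, as described above: discrimination could stem from
# biased training data and/or error in the model itself).
RISK_CATEGORIES = {
    "Discriminatory outcomes": {"Bias in training data", "Model error", "Discrimination"},
    "Unexplainable decisions": {"Lack of transparency"},
}

def categories_for(risk: str) -> set[str]:
    """Return every sub-category a risk falls under (may be more than one)."""
    return RISK_CATEGORIES.get(risk, set())

def risks_in(category: str) -> list[str]:
    """Return all risks filed under a given sub-category."""
    return [r for r, cats in RISK_CATEGORIES.items() if category in cats]

print(categories_for("Discriminatory outcomes"))
print(risks_in("Model error"))
```

Because lookup works in both directions, a triage exercise can start either from a category ("what risks sit under model error?") or from a single risk ("which categories does discrimination touch?").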

Web Chart 1

Similarities between the two

It is not surprising that the bias and transparency risks in the ChatGPT taxonomy also appear in the human-generated one, since the data sources were probably similar.

The ChatGPT risks for error propagation and data quality are framed from an input perspective; the human-generated version treats error rates as an output measure from the AI, regardless of source, but covers the same ground.

ChatGPT also featured a disclaimer that it was ‘not exhaustive’ – unnervingly close to the working party’s disclaimer.

Differences between the two

ChatGPT highlighted two risks (over-reliance and regulation) that were missed in the human taxonomy. These are valid and will be incorporated in the next update.

The main difference between the taxonomies is the level of detail and the number of risks identified, with the human one being more complicated. This may simply be due to the limitations of the current free test version of ChatGPT, but there may also be other reasons.

In well-designed systems, risks rarely materialise fully unless there has been a chain of process or control failures, although the well-publicised failures of AI sometimes attribute these to simple, single causes. The human taxonomy includes risks where it is possible to infer what else might have happened, as well as the headline causes. ChatGPT seems to have simply taken the data at face value.

This latter point would be expected as ChatGPT is a large language model – a type of AI designed to accept and respond to queries in a human-like manner from a language perspective. It can provide multiple definitions of a risk but has no underlying concept of risk. It can search data in the public realm and distil it into a generic response but cannot take account of an individual company’s culture and perspectives.

In particular, the risks suggested by ChatGPT are all those that have already been discussed in the media – including the two missed in the human taxonomy. Risk management is essentially concerned with what might happen in the future, and while risks that have been identified in the past are a good indicator, ChatGPT seems to lack the capability to analyse a situation and identify novel risks.

The way forward

A risk taxonomy is a tool to facilitate communication about risks and help manage them. It is context-dependent and there will never be a single ‘correct’ version.

Given this, the working party has produced a generic AI ethics risk taxonomy that is as comprehensive as possible to allow for customisation, although, as with any human endeavour, it isn’t perfect. The ChatGPT version, in contrast, is simpler but more superficial.

It seemed appropriate to give the last word here to ChatGPT. Asked if it was worried about the risks it faces itself, it said the following (the highlighting is the author’s).

It has no qualms about leaving the sharp end of risk management to the humans:

ChatGPT: As an AI language model, I do not experience emotions, so I cannot feel worried about risks or any other events. However, I am designed to identify and address potential risks through advanced algorithms and protocols to ensure that I operate as safely and effectively as possible. Additionally, my development team and human operators are responsible for monitoring my performance and mitigating any risks that arise to ensure that I continue to operate with the highest levels of safety, reliability and accuracy.

Example risks for the human-generated taxonomy and a suggested triage scale for risk materiality are covered in a forthcoming podcast and presentation from the working party.



What changes would you make to the human-generated taxonomy? What is the role of the actuary in the identification and communication of risks? Share your views with the working party via the IFoA communities forum at actuaries.org.uk – and let us know your thoughts at [email protected]


Karen Usher is a risk management specialist and a member of the IFoA AI Ethics Risk Working Party

Image credit | Ikon | Shutterstock

This article appeared in our May 2023 issue of The Actuary .

© 2023 The Actuary. The Actuary is published on behalf of the Institute and Faculty of Actuaries by Redactive Publishing Limited. All rights reserved. Reproduction of any part is not allowed without written permission.