Actuarial intelligence needed for artificial intelligence

Open-access content Thursday 18th August 2022 — updated 1.30pm, Friday 2nd September 2022

Jerome Nollet and Karen Usher introduce the work of the Artificial Intelligence Ethics Risk Working Party, and explain why actuaries need to make themselves aware of the risks in this area


Sir Derek Morris’s 2004 report, prompted by the failure of Equitable Life, placed a significant burden on the actuarial profession.

The report’s principal points centred on actuaries’ ability to explain the methods and approaches that lead to results and decisions, and on the lack of sufficient transparency in actuarial advice. It not only led to the creation of the Board for Actuarial Standards and the Technical Actuarial Standards (TASs), but was also, in the view of many, the catalyst for actuaries’ entry into risk management. This made sense, and we must applaud the profession’s contribution to risk management – which continues today, with many actuaries taking the lead in assessing, explaining, mitigating and optimising risk-taking in their organisations.

However, these trends were created not with foresight, but in reaction to historical failures and weaknesses. Today we see a potential parallel in the extremely rapid expansion of artificial intelligence (AI) across many industries, yet so far there has been limited actuarial contribution to assessing and managing the related risks.

Do we need another Equitable to wake us up?

What should be of greatest concern is the actuarial profession’s general lack of awareness that the replacement of human control is at the very core of AI. Traditional reserving or pricing models are tools that allow actuaries to measure a risk, communicate the range of possible outcomes, identify tail risks and, in the end, make decisions. AI models, on the other hand, make the decisions themselves.

AI models go beyond human intelligence, opening a whole new world of possibilities and potential outcomes. However, explaining the output of AI models will often pose an enormous challenge – especially when there has been a significant failure. And if we cannot explain the output, how can we ensure it is reasonable, relevant or ethical?

The ability to measure and explain outcomes is key to risk mitigation. The complexity of AI models will make this a major challenge, so it is something actuaries should start focusing on urgently. If a major AI scandal takes place – for example, a model discriminates against a particular population, confidential information is released, or a simple but key error in the model leads to significant losses – one can be sure that the actuaries in the organisations involved will be held responsible.

Fortunately, actuaries are skilled at identifying potential risks, even when they are not AI experts. Analysing models, ranges and risks is in the actuary’s DNA – and these skills can be applied well beyond traditional insurance, helping a wide range of industries.

The US$500m failure of the Zillow online marketplace is a perfect example of a company delegating its decisions to an AI model that then produced the wrong outcomes. If you search ‘Zillow’ online, you will find plenty of articles stating that the company’s algorithms were wrong, as if a little bad programming were the whole cause of the failure.

However, because of their training, actuaries will recognise that the main cause of the failure was not an algorithm error but a classic case of adverse selection. For example, if Zillow’s AI model prices my house above what I know is its real value – because it is not aware of my new noisy neighbours – then I will sell it to the company. If, on the other hand, the model underprices my house because it is unaware that my new neighbour is a rock star, I will sell through a local broker, who will get me a higher price. An actuary would have identified this risk and saved Zillow millions, and this is why it is so important that actuaries get involved with AI.
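
To make the mechanism concrete, here is a minimal, purely illustrative simulation in Python. It is not based on Zillow’s actual model or data, and all numbers are made up; it simply shows how an instant-buy pricing model with unbiased errors still loses money when owners only accept offers that exceed the value they privately know.

```python
# Toy sketch of adverse selection in an automated house-pricing model.
# All figures are hypothetical and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_homes = 100_000
true_value = rng.normal(300_000, 60_000, n_homes)   # what each owner knows
pricing_error = rng.normal(0, 20_000, n_homes)      # unbiased model noise
model_offer = true_value + pricing_error             # the instant-buy offer

# Owners accept only when the offer beats the value they know they can get
# elsewhere; otherwise they sell through a local broker instead.
accepted = model_offer > true_value

# The buyer later resells at (roughly) true value, so its profit on each
# purchased home is true_value - model_offer, which is negative by design
# on exactly the deals it wins.
profit = true_value[accepted] - model_offer[accepted]

print(f"Share of offers accepted:      {accepted.mean():.1%}")
print(f"Average loss per home bought:  {-profit.mean():,.0f}")
print(f"Average pricing error overall: {pricing_error.mean():,.0f} (unbiased)")
```

Even though the pricing errors average out to roughly zero across all homes, the deals that are actually accepted are precisely the overpriced ones, so the buyer loses money on every purchase on average – the signature of adverse selection that actuaries are trained to spot.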

The AI Ethics Risk Working Party

Aware of both the risks and the opportunities for the profession, the IFoA has created the AI Ethics Risk Working Party, which has a wide mandate to look at multiple aspects of the topic. The hope is that this will encourage actuaries to get actively involved in matters related to AI. The working party is currently building a taxonomy of AI ethics risks and will soon be proposing additions to the existing TASs to cover ethics risks. We will also examine themes such as actuarial qualifications and perhaps an ethics label, and hope to lead an event on AI risks towards the end of the year.

Please contact the working party with your ideas and suggestions. We would particularly welcome stories of actuarial involvement in driving actions to ensure ethical implementation and proper risk management in AI.

Jerome Nollet served four years on the Financial Reporting Council’s Board for Actuarial Standards. He is a corporate finance adviser, and founder and deputy chair of the IFoA AI Ethics Risk Working Party

Karen Usher is a retired risk management professional and a member of the AI Ethics Risk Working Party

Image credit | Shutterstock
