Jerome Nollet and Karen Usher introduce the work of the Artificial Intelligence Ethics Risk Working Party, and explain why actuaries need to make themselves aware of the risks in this area

Sir Derek Morris’s 2004 review of the actuarial profession, prompted by the failure of Equitable Life, placed a significant burden on the profession.
The report’s principal criticisms centred on shortcomings in actuaries’ ability to explain the methods and approaches behind their results and decisions, and on a lack of transparency in actuarial advice. It not only led to the creation of the Board for Actuarial Standards and the Technical Actuarial Standards (TASs), but was also, in the view of many, the catalyst for actuaries’ entry into risk management. This made sense, and we must applaud the profession’s contribution to risk management, which continues today, with many actuaries taking the lead in assessing, explaining, mitigating and optimising risk-taking in their organisations.
However, these developments arose not from foresight but in reaction to historical failures and weaknesses. Today we see a potential parallel in the extremely rapid expansion of artificial intelligence (AI) across many industries, where there has so far been limited actuarial contribution to assessing and managing the related risks.
Do we need another Equitable to wake us up?
What should be of greatest concern is the actuarial profession’s general lack of awareness that the replacement of human control lies at the very core of AI. Traditional reserving or pricing models are tools that allow actuaries to measure a risk, communicate the range of possible outcomes, identify tail risks and, ultimately, make decisions. AI models, on the other hand, make the decisions themselves.
AI models can reach conclusions that no human would, opening up a whole new world of possibilities and potential outcomes. However, explaining their output will often pose an enormous challenge, especially after a significant failure. And if we cannot explain the output, how can we ensure it is reasonable, relevant or ethical?
The ability to measure and explain outcomes is key to risk mitigation. The complexity of AI models makes this a major challenge, and one that actuaries should start focusing on urgently. If a major AI scandal takes place (a model discriminating against a particular population, say, or confidential information being released, or a simple but critical model error leading to significant losses), one can be sure that actuaries in the affected organisations will be held responsible.
Fortunately, actuaries are skilled at identifying potential risks even when they are not AI experts. Analysing models, ranges and risks is in the actuary’s DNA, and these skills apply well beyond traditional insurance, allowing actuaries to help a wide range of industries.
The US$500m failure of Zillow’s online home-buying business is a perfect example of a company delegating its decisions to an AI model that then produced the wrong outcomes. Search for ‘Zillow’ online and you will find plenty of articles stating that the company’s algorithms were wrong, as if a little bad programming were the entire cause of the failure.
However, because of their training, actuaries will recognise that the main cause of the failure was not an algorithmic error but a classic case of adverse selection. If Zillow’s AI model prices my house above what I know to be its real value because it is unaware of my new noisy neighbours, I will sell the house to the company. If, on the other hand, the model is unaware that my new neighbour is a rock star, I will sell through a local broker, who will get me a higher price. Sellers exploit the model’s errors in one direction only, as the sketch below illustrates. An actuary would have identified this risk and saved Zillow millions, which is why it is so important that actuaries get involved with AI.
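To make the mechanism concrete, here is a minimal sketch in Python. It is our own stylised illustration, not Zillow’s actual model: we assume owners know their home’s true value, the pricing algorithm’s valuation error is zero on average, and owners accept an offer only when it exceeds that true value.

```python
# Stylised adverse-selection sketch (illustrative assumptions, not Zillow's model).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Owners know the true value of their home; the algorithm does not.
true_value = rng.lognormal(mean=12.5, sigma=0.3, size=n)

# The algorithm's valuation error is unbiased: zero mean, 5% standard deviation.
model_error = rng.normal(loc=0.0, scale=0.05, size=n)
offer = true_value * (1.0 + model_error)

# Sellers accept only offers above true value; otherwise they use a broker.
accepted = offer > true_value

# Margin the algorithm earns on the homes it actually buys.
margin = (true_value[accepted] - offer[accepted]) / offer[accepted]

print(f"Offers accepted:                {accepted.mean():.1%}")
print(f"Average error across all bids:  {model_error.mean():+.3%}")
print(f"Average margin on homes bought: {margin.mean():+.2%}")
```

In this run the model’s average pricing error across all bids is essentially zero, yet the average margin on the homes actually purchased is roughly minus 4%: the loss is driven entirely by which offers get accepted, not by any bias in the model.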
The AI Ethics Risk Working Party
Aware of both the risks and the opportunities for the profession, the IFoA has created the AI Ethics Risk Working Party, which has a wide mandate to examine multiple aspects of the topic. The hope is that this will encourage actuaries to get actively involved in AI-related matters. The working party is currently building a taxonomy of AI ethics risks and will soon propose additions to the existing TASs to cover ethics risks. We will also examine themes such as actuarial qualifications and perhaps an ethics label, and we hope to lead an event on AI risks towards the end of the year.
Please contact the working party with your ideas and suggestions. We would particularly welcome stories of actuarial involvement in driving actions to ensure ethical implementation and proper risk management in AI.
Jerome Nollet served four years on the Financial Reporting Council’s Board for Actuarial Standards. He is a corporate finance adviser, and founder and deputy chair of the IFoA AI Ethics Risk Working Party
Karen Usher is a retired risk management professional and a member of the AI Ethics Risk Working Party