Dr Joseph Lo, Dr Ed Tredger and Bernadette Hlavka discuss the tricks and possible pitfalls when conducting a successful expert judgment elicitation meeting
Actuary: What's your one in 200?
Underwriter: Last year's loss ratio
Actuary: How do you picture a bad year?
Underwriter: Front page news
Actuary: What would surprise you?
Actuary: Do you have any idea how my T-copula will look based on that information?
Underwriter: Get out.
While this may not be exactly how your last expert judgment elicitation meeting went, it hints at the fact that actuaries don't always know how best to ask for probabilistic information, and the experts being questioned often know even less about how to provide it.
The elicitation process is critically important, interesting and well within the scope of actuaries. As actuaries in the London market, who engage with such processes on a regular basis, we ambitiously embarked on a project to come up with a best practice approach. Reading through large amounts of material and drawing from our own experience, we soon realised that this is an area that needs substantial specific research, possibly with the help of an industry working party, spanning several years. Nevertheless, during our investigations we found some fundamental ideas that should be considered when beginning an elicitation process.
Current state of play
The extent to which expert judgment is relied upon in the London market can hardly be over-estimated. Many pricing models use judgments made by underwriters. The very fabric of capital models depends on eliciting probabilistic assessments from experts, with actuaries left to hang their hats on the results. Expert judgments are increasingly subjected to systematic examination within Solvency II, and Lloyd's has recently published a report on the cognitive aspects of risk perception.
Without motivated experts who understand the impact of their inputs and facilitators who know how to get the best from their experts, we risk the credibility of actuarial work being seriously undermined. Luckily for actuaries, much of the groundwork has been done in other professions. One example is weather forecasting, which has a long history of providing judgmental probabilistic forecasts. As early as 1906, Ernest Cooke in Western Australia proposed adding judgmental weights to weather forecasts to indicate confidence. Other areas that actuaries can learn from include medical diagnostics, civil engineering, intelligence analysis and new product markets.
There is a wealth of academic literature from a wide range of fields, but few starting points are better than the seminal book Uncertain Judgements: Eliciting Experts' Probabilities by Tony O'Hagan et al, which covers the most important literature. However, techniques gained from other disciplines require tailoring to the specific needs of our industry and market. While we believe a working party will be instrumental in providing robust conclusions, we offer some preliminary ideas here.
1. Putting yourself in the expert's shoes
Understanding is often the beginning of communication. In the case of actuaries, we should remember that experts don't often think in terms of one in 200, or of frequency and severity versus aggregated results, nor do they go to bed at night with a historical list of claims re-based to the next accident year. To make the point with an example, ask yourself how you would answer the question: what is the 90th percentile of the number of cash payments greater than £50 that you might make in a month? We would have no clue, even though we might be considered experts in this area.
Actuaries need to be well prepared for elicitation meetings and anticipate questions in advance. Coming equipped with claims history and high-level analyses can vastly improve the quality of results. In particular, actual versus expected analyses are helpful for improving judgmental skills. Finally, we note that there can be a tension between an actuary getting to the best estimate and the experts having their eyes on their capital allocation, bonus, or even portfolio make-up.
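To illustrate the kind of high-level analysis this preparation could include, here is a minimal sketch in Python of an actual-versus-expected summary an actuary might bring to the meeting. All figures, and the choice of a simple ratio per year, are purely illustrative.

```python
# Illustrative actual-versus-expected (A/E) summary an actuary might
# bring to an elicitation meeting. All figures are made up.

expected = {2019: 10.0, 2020: 11.5, 2021: 12.0, 2022: 13.0}  # plan, in millions
actual = {2019: 9.2, 2020: 14.8, 2021: 11.1, 2022: 12.6}     # booked, in millions

def ae_ratios(actual, expected):
    """Return the actual/expected ratio for each year."""
    return {year: actual[year] / expected[year] for year in expected}

for year, ratio in sorted(ae_ratios(actual, expected).items()):
    flag = "  <-- worse than plan" if ratio > 1 else ""
    print(f"{year}: A/E = {ratio:.2f}{flag}")
```

Even a table as simple as this gives the expert concrete feedback on how their past judgments fared, which is exactly what sharpens judgmental skill over time.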
2. Psychological insights
There is a widely reported tendency for respondents to exhibit anchoring (a tendency not to move from some baseline, for example a business plan or last year's estimate) and other biases. Actuaries should be aware of these psychological traits and ask questions accordingly. Questions should be specific, draw on the experts' field of expertise, and not encourage bias. For example: "You have been in the market for 20 years. What is the worst result you have seen so far?", rather than jumping straight to "What do you think is the one in 200?"
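One way the actuary can then use such an answer: the worst outcome observed in n years can be loosely read as sitting around the n/(n+1) quantile, the expected plotting position of the maximum of n observations. A short Python sketch under that assumption (a heuristic, not a formal estimator):

```python
# Rough translation of "worst result seen in n years" into an implied
# percentile, using the expected plotting position n/(n+1) of the
# sample maximum. This is a heuristic, not a formal estimator.

def implied_percentile(n_years: int) -> float:
    """Approximate quantile level represented by the worst of n years."""
    return n_years / (n_years + 1)

p = implied_percentile(20)
print(f"Worst result in 20 years ~ {p:.1%} percentile "
      f"(about a 1-in-{round(1 / (1 - p))} year event)")
```

The point is not the precise number but that the expert has been asked about something they have actually experienced, leaving the extrapolation towards the tail to the actuary.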
3. Real-time feedback
In appropriate situations, providing real-time feedback is a powerful way of speeding up the validation cycle. Being able to show the impact of an opinion straight away, rather than retreating into a dark room and producing results a week later, helps people understand the impact of their decisions and eases subsequent buy-in for what you are trying to do. The result is greater expert ownership of, understanding of and confidence in the actuarial models.
Making use of modelling software such as R, together with a readily available template document such as SHELF from the University of Sheffield, one can apply interactive elicitation in the real world. Once the user keys in their judgments (for example, one in 10 and one in 200 for frequency and severity), this approach can fit dozens of distributions and run tens of thousands of simulations involving maximum lines and reinsurance structures in a couple of seconds. It can then present value at risk (VaR) and tail value at risk (TVaR) statistics, both gross and net, back to the user.
4. An annual cycle
A distinctive feature of actuarial work is revisiting assessments year-on-year. What we come up with now will feed into next year's process and the year after that; the first exercise already shapes long-term results. This doesn't mean that the expert cannot change their opinion. In fact, an expert's opinion should be open to revision as more information emerges, and they are justified in changing it in the face of an ever-changing risk environment. However, the actuary and the experts need to be aware of the short-term and long-term implications of the current year's judgment: given the propensity for anchoring, prior judgments may limit the effectiveness of future elicitations.
The points above are aimed at encouraging discussion and reflection rather than prescribing best practice. Actuaries need to draw on state-of-the-art research, yet avoid alienating the experts we rely upon so heavily.
To a large extent, the future relies on actuaries rising to the challenge and equipping members with the skills and vocabulary needed to obtain and use expert judgment effectively. While this will always be a work in progress, there are several things the actuarial profession can do now to enable actuaries to play a more visible role.
Actuarial training and research programmes often do not focus on facilitating expert judgment elicitation. For example, our CA3 communications exam is only half true to its name: we formally train and test on how to convey actuarial results, but not on how to facilitate good communication from our experts.
Empirical research on eliciting expert opinions in the actuarial context is inconclusive. The actuary of the future will be multi-disciplinary, bringing together mathematics, psychology and potentially the social sciences.
An expert opinion elicitation working party would be a useful next step forward. It could examine and define research questions more carefully. It could also consider the educational side to prepare actuaries to more effectively facilitate elicitation of expert opinions.
The authors would like to thank Richard Barke for contributing to the project and Ajay Chhabra for useful discussions. A GIRO working party on eliciting expert probabilities in general insurance is being set up and will be seeking volunteer members. For details, visit the IFoA's volunteer webpage at www.actuaries.org.uk/members/pages/volunteer-vacancies