How can managers best understand their firm's prevailing level of risk? The conventional answer is by tracking good indicators. But choosing these is not straightforward, says Paul Harwood

Experience of performance management provides some useful insight into tracking good risk indicators.
Comparing actual performance with simple targets is an obvious approach to providing numeric management information (MI). What's more, targets can be motivating. They ensure that everyone knows what is important and where they stand.
Unfortunately, the temptation to game simple targets often proves too great. It leads to widespread managing of the metric rather than the outcome, especially when bonuses are involved. Quality suffers. It turns out that many jobs are subtler than simple targets assume. Experience, operator autonomy and redundancy have hidden benefits.
This result is captured in Goodhart's law: roughly, a metric loses its capacity for insight once it is recognised as management information. The act of observing changes the outcome. An objective measure becomes more subjective once it is recognised as important.
The 'balanced scorecard' supposedly captures the subtleties, at the expense of more complexity. Yet it turns out that the one thing less motivating than a single simple target is a complicated, nuanced one. Research shows that complex incentives don't work well. People simply don't trust them, or can't be bothered trying to adjust their behaviour to satisfy well-meaning but convoluted approaches.
The true story
These effects have led to the all-too-familiar situation of risk metrics telling one tale when reality is different. The PRA's Andrew Bulley highlighted this in a 2016 speech to industry. He noted that in pre-crisis banking, risk-weighted assets (the metric) dropped by almost half, while leverage (the reality) increased dramatically. The missing element is a sensitive understanding of the context.
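The arithmetic behind that divergence is simple to sketch. The figures below are invented for illustration and are not drawn from Bulley's speech; they merely show how a growing balance sheet with falling average risk weights can shrink the metric while stretching the reality.

```python
# Illustrative only: hypothetical balance sheets showing how risk-weighted
# assets (the metric) can fall even as leverage (the reality) rises.

def risk_weighted_assets(exposures):
    """Sum of exposure amounts multiplied by their regulatory risk weights."""
    return sum(amount * weight for amount, weight in exposures)

def leverage(total_assets, equity):
    """Crude leverage: total assets divided by equity."""
    return total_assets / equity

equity = 10  # held flat in both periods for the illustration

# 'Before': 100 of assets carrying an average 50% risk weight.
before_book = [(100, 0.50)]
# 'After': the balance sheet doubles, but the assets now attract a much
# lower average risk weight (through modelling or reclassification, say).
after_book = [(200, 0.15)]

print(risk_weighted_assets(before_book), leverage(100, equity))  # -> 50.0 10.0
print(risk_weighted_assets(after_book), leverage(200, equity))   # -> 30.0 20.0
# RWA falls from 50 to 30 while leverage doubles from 10x to 20x:
# the metric improves while the exposure grows.
```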
There are plenty of other examples. Operational risk reporting is a good one. Boards demand that operational risk incidents be counted and reported. Their attitude to the results is instructive. Should the number of reported incidents fall or rise over time?
If the trend is rising, is this bad news, evidence of increasing risk exposure? Or does it show that people are actively finding more incidents to report, that they are becoming more assiduous in reporting, including smaller, rarer incidents? The latter is the apotheosis of an engaged risk culture. But explaining this to a board can be difficult, and possibly career-limiting when there is a high-level imperative to reduce reported risk levels.
Objective thinking
Are fewer reported incidents a good sign? Possibly. Yet thoughtful consideration of a falling trend in reported incidents might suggest that issues are being classified away (the reporting threshold is too sensitive, or that particular risk is too small, for example), or that the backlash to reporting is so severe that it is easier to ignore the issue. Maybe people report at the rate that they can stand. In short, counting incidents serves no risk intelligence purpose whatsoever if the recording process (the subjective part) corrupts the data to a greater extent than it reflects the underlying riskiness (the objective part).
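A toy calculation makes that corruption visible. The model below is a deliberate oversimplification of my own: it assumes a single 'reporting rate' that decays while the underlying incident count stays flat, and every number in it is invented.

```python
# A minimal sketch: reported incidents = underlying incidents x reporting rate.
# Both series are invented; the point is the shape of the trend, not the values.

years = range(2018, 2024)
underlying = {year: 120 for year in years}  # true riskiness: flat over time

# Reporting engagement fades from 90% to 40% as the backlash bites.
reporting_rate = {year: 0.9 - 0.1 * i for i, year in enumerate(years)}

for year in years:
    reported = round(underlying[year] * reporting_rate[year])
    print(year, "underlying:", underlying[year], "reported:", reported)

# Reported incidents fall steadily (108, 96, 84, 72, 60, 48) even though the
# underlying risk never moves: the trend in the metric reflects the recording
# process, not the riskiness it is supposed to measure.
```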
Thus fuzzier, judgment-reliant, explicitly subjective measures have their place, particularly from people close to the action. There are nuances here too. A familiar idea in business, if not in all human relationships, is the degree to which it is politic to tell the boss/customer/colleague what they want to hear. This desire to please, to relay good news, to ignore the ugly stuff, doesn't make for effective assessment.
Where does this leave risk managers? With three warnings. First, beware the obvious metrics, where changes in behaviour distort their effectiveness. Second, just because something can be measured doesn't mean that it reflects reality. Third, asking people to judge is fine, provided there is allowance for the natural bias in reporting.
How then to solve the mystery of risk measurement? Risk managers have to weave a complex tapestry by combining multiple information sources, some weak, some stronger, regular and irregular, formal and informal. The risk manager's skills include continual checking, reconciliation, re-testing and re-appraisal. Understanding and interpreting the context is all-important, as is an understanding of the people factors that affect both activity and its reporting.
Actuaries who understand the moving parts across an entire organisation are well placed for this work, which is truly enterprise-wide risk management.
This is the first in a short series of articles commissioned by The Actuary to demonstrate risk management in practice. Read about the agenda of the IFoA’s Risk Management Board.