The Actuary The magazine of the Institute & Faculty of Actuaries

The computer will see you now 

07 MARCH 2019 | GEORGE BURTON

Student

Artificial intelligence (AI) is increasingly used to make healthcare decisions: no longer confined to the realms of The Matrix, it is showing a genuine practical emergence in the world of medical diagnosis. It seems AI really can reach conclusions in healthcare that approximate human cognition, without direct human input. So, as actuaries of the future, should we all be prescribing a high dose of the Machine Learning chapter from CS2?

I class ‘AI and healthcare’ in the same category as ‘big data’ – exciting, but worthy of scepticism. These technologies have certainly delivered benefits, such as reduced medical costs and a wider distribution of healthcare, to name a couple. I’m certainly not a medical professional, but such patient outcomes are difficult to ignore.

Just because you can do something, though, doesn’t mean you should. 

The peer review process of the scientific method is the long-established basis for the ‘do no harm’ philosophy of medicine – the doctors’ equivalent of the actuarial control cycle. Given commercial pressures to innovate, it is difficult to imagine Silicon Valley suddenly rushing to adopt this level of control.  

The combination of a patient’s medical history and ‘learned’ knowledge could lead to significant advances in the personalisation of healthcare. Given the myriad competing commercial interests facing firms, how would the associated responsibility be governed? For example, would you be happy to sell your data for insurance price discrimination purposes? (Perhaps one best left for the regulators.)

Machines are also famously unsympathetic. You’ll know this feeling if, like me, you have spent more hours than you’d care to recall screaming at Excel. Imagine that the output in question isn’t a relatively trivial calculation, but a cancer diagnosis. Would we be comfortable with this – and at what point would we want emotional intelligence to intervene?

Close behind these pragmatic concerns come questions of accountability. I’m sure you’ll agree that teachers have an influence on the way we learn – and the deep learning methods of AI are no different. Train a model on the wrong examples and it will learn unintended biases and draw incorrect conclusions. If a machine repeatedly reasons incorrectly, whom do we hold accountable?

Teachers also drill into us the importance of ‘showing our working’. Many experts have argued that, by contrast, some neural networks and deep learning methods create a ‘black box’: a system whose inputs and outputs are known, but whose internal workings are not (a small sketch of the idea follows below). Deep learning is largely a process of trial and error: a series of adjustments towards a pre-defined goal. How do we explain to patients the process by which a deep learning algorithm has reached their diagnosis, based on a complex medical history?

For many, AI represents a new lease of life for a stretched healthcare system. For the sceptics, however, it represents an uncertain risk that could leave machines ‘playing god’.
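To make the ‘black box’ point concrete, here is a minimal sketch in Python. The specifics – scikit-learn’s MLPClassifier, the public breast cancer dataset, the chosen layer sizes – are my illustrative assumptions, not anything from this article or a clinical tool. The point is simply that the inputs and outputs are plainly visible, while the ‘reasoning’ in between is a mass of learned weights with no clinical meaning.

```python
# A minimal 'black box' sketch (illustrative only, not a medical tool):
# we can inspect what goes in and what comes out, but the fitted weights
# in between carry no human-readable explanation of the diagnosis.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # inputs: 30 tumour measurements per patient
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

# Output: class probabilities ('malignant' vs 'benign') for one patient -- but why?
print(clf.predict_proba(scaler.transform(X_test[:1])))

# The 'explanation' is roughly 1,500 numeric weights, none individually interpretable.
print(sum(w.size for w in clf.coefs_), "learned weights")
```

Every input and every output above can be printed and audited; the weights cannot be read as a clinical argument, which is exactly the gap the sceptics worry about.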

Actuaries have always worked with uncertain futures. Imagine a post-‘2019 curriculum’ world where the role of actuaries has changed and our skills are increasingly recognised across a wide range of fields. This may not be merely another permutation of existing actuarial fields, but a larger paradigm shift. I firmly believe actuaries are well positioned to navigate this commercially focused, judgment-led technological future. 

The only certainty is uncertainty. Considering the current pragmatic and ethical hurdles, it may be a little while before ‘the computer will see you now’.

George Burton