I recently attended the IFoA's event The Actuary as a Data Scientist, which gave an interesting window on the profession's future. Towards the end of the day, however, a comment came from the audience: "Data science is just another tool to which actuaries will adapt, as we have so many others." This is a happy thought, but, sadly, I must disagree.
The core concept of data science is fairly straightforward. Rather than use model-specific statistics to confirm goodness-of-fit, we instead calibrate our models on a proportion of the dataset and reserve the rest for testing. This lets us rapidly iterate over diverse model types, untethering our analysis from a single approach's preconceptions. So far, so good.
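That core concept can be sketched in a few lines. The example below is a hypothetical illustration, assuming scikit-learn and a synthetic dataset (nothing here comes from the event itself): calibrate several model types on 70% of the data, then score each on the reserved 30%.

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data standing in for, say, real claims experience.
X, y = make_regression(n_samples=500, n_features=5, n_informative=5,
                       noise=10.0, random_state=0)

# Calibrate on 70% of the dataset; reserve the other 30% for testing.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Iterate over diverse model types, judging each on held-out data
# rather than on model-specific goodness-of-fit statistics.
scores = {}
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    model.fit(X_train, y_train)
    scores[type(model).__name__] = model.score(X_test, y_test)
```

Because every candidate is judged on the same held-out data, swapping in a third or fourth model type is a one-line change, which is precisely what makes rapid iteration practical.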
However, data science is itself the thin end of a huge wedge labelled 'computer science'. In recent decades, actuaries have largely failed to adopt new best practices emerging from the IT world, primarily because Excel discourages their use! Examples include: version control, structured programming, modularisation, data normalisation, performance optimisation, unit testing, reproducibility, deployment and bug tracking.
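To make one of those practices concrete, here is a hypothetical sketch of unit testing applied to a routine actuarial calculation (the function and figures are illustrative, not drawn from the event): the logic lives in a plain, named function, and its checks run automatically on every change rather than being eyeballed in a spreadsheet.

```python
def annuity_factor(rate: float, n_periods: int) -> float:
    """Present value of an annuity-immediate paying 1 per period."""
    if rate == 0:
        return float(n_periods)
    return (1 - (1 + rate) ** -n_periods) / rate

# Unit tests: a safety net a spreadsheet formula rarely gets.
assert annuity_factor(0.0, 10) == 10.0           # zero-rate edge case
assert abs(annuity_factor(0.05, 10) - 7.7217) < 1e-4
```

In a real project these assertions would sit in a test suite run by a tool such as pytest, so a broken formula fails loudly before it reaches production.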
Now that R and Python have emerged as practical challengers to Excel, those chickens are coming home to roost. Actuaries may finally be compelled to recognise that, in many cases, their routine modelling work is a form of software development, and that, viewed as software rather than merely as analysis, it is often mediocre at best and unmaintainable at worst.
Escaping this hole is possible for both individual actuaries and entire teams, and in the long run is likely to deliver a substantial dividend of agility and efficiency. However, adapting not just to the new tools but to the habits of the profession that created them will require discipline, investment and foresight.
22 November 2018