
IT: Cloud gathers for Solvency II

“In its early days, most computing took place on mainframes. Ever-falling costs led computing to shatter - first into minicomputers, then into personal computers (PCs) and, more recently, hand-held devices. Now communication is catching up with hardware and software and, thanks to cheap broadband and wireless access, the industry is witnessing a pull back to the middle. This is leading much computing to migrate back into huge data centres. Networks of these computing plants form “computing clouds” - vast, amorphous, delocalised nebulae of processing power and storage.” (Economist, 26 June 2008)

This extract from the Economist article was not written with actuaries in mind, but it may as well have been. First there were mainframe actuarial valuation systems and then, owing to the need for greater flexibility and responsiveness to changing requirements, PC-based projection systems evolved which were customised, maintained and run by actuarial departments. Recent demands call for greater control and auditability as well as risk capital calculations that require ever larger amounts of computing hardware. The response by vendors of actuarial systems has been to promote “enterprise wrappers” to support the migration of such systems into data centres controlled by the IT department.

This article explores whether the next evolutionary stage for actuarial systems (as depicted in Figure 1) is the provision of software and hardware on an ‘IT as a service’ basis, including the use of ‘cloud computing’, and what this would mean for actuarial modelling.

Actuarial IT evolution (Figure 1 below)
What is ‘IT as a service’?
The precise definitions and service offerings are still evolving. Essentially, it is the ability of organisations (and individuals) to access and leverage a range of applications, processing power and storage services that they do not own. In addition, it usually involves a ‘pay as you go’ solution, accessed and provided over the web, rather than a traditional upfront licence and in-house deployment business model.

Whilst the definition can include traditional IT outsourcing, it encompasses much more than that. It can be software as a service (SaaS), where multi-tenancy applications are offered to a wide range of customers, or hardware as a service (HaaS), where hardware is shared. Variations such as infrastructure as a service (IaaS) and platform as a service (PaaS) also exist.

Google, MySpace, and Facebook are all offering SaaS based on social networking, while Amazon, Salesforce.com and IBM are offering a form of PaaS. These and other offerings are expanding rapidly in terms of capability, flexibility and maturity.

What is cloud computing?
Cloud computing extends ‘IT as a service’ to the provision of massive infrastructure built on commodity hardware and predominantly open source software, designed from the ground up to cope with wide fluctuations in demand. This enables application solution providers to offer computing power and data storage at very low cost. Accessed through the internet, it decouples computing capability from the hardware itself. From the users’ perspective, where the computing resources are located largely does not matter; they may not even be aware of it.

At a technical level, cloud computing can be considered a subset or variant of grid computing, or a combination of grid, service oriented architecture (SOA) and virtualisation (see Figure 2). At a basic level it is simply utility computing – the delivery of computing resources in the same sense as electricity or water. However, there is a difference: a utility computing service could provide you with, say, one virtual server for a month (roughly 720 hours) – at a price – with limited flexibility in terms of the contract. A cloud computing service could provide the processing power of 720 servers for one hour for the same price – or whatever combination the user may choose, in a very flexible way.
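
As a rough worked example of that arithmetic, the short Python sketch below compares the two purchases. The price per server-hour is entirely hypothetical and is there only to show that the spend is the same while the elapsed time is very different.

    # Illustrative only: the price per server-hour is assumed, not a quoted rate
    price_per_server_hour = 0.10

    hours_per_month = 30 * 24                                   # 720 hours
    utility_cost = 1 * hours_per_month * price_per_server_hour  # one server for a month
    cloud_cost = 720 * 1 * price_per_server_hour                # 720 servers for one hour

    print(utility_cost, cloud_cost)  # 72.0 72.0 - same spend, very different elapsed time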

The cloud (Figure 2 below)
History and context
Advances in virtualisation, the need to reduce power consumption and the requirement of organisations to access ever more processing power and data storage are helping to make the cloud real. This has evolved over many years, growing from simple time sharing to large scale grid computing accessed over the web by multiple companies.

Time sharing
This involves the sharing of a computer’s resources among many users through multi-tasking. The advent of the mini-computer and then the PC largely killed off the concept of time sharing, owing to the increasingly low cost of processing power. Interestingly, the internet has brought the concept back, although it is largely hidden from most people: a web server is really a time sharing system for all of the users accessing it.

Utility computing
The provision of computing services in the same way as a public utility. It involves the use of computing power considerably larger than the single computer used for time sharing. The idea has been around since at least 1960, but it is fair to say that only recently has the concept of ‘utility’ become truly practical.

Distributed computing
This involves breaking what may be considered to be a single program into pieces that run simultaneously on multiple computers, quite often with different characteristics and possibly at different locations.

Grid computing
This is used to solve large scale computational problems by breaking the problem into smaller pieces that can be run on each computer in the grid. Typically a grid uses the idle time of computers on a network and can harness many thousands of computers – or even millions if the internet is used as the network. Types of grid include computational, data and application service provision. Grids typically run on computers that the customer already owns, so the computing power is sometimes seen as ‘spare’ and therefore free.
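
The split-and-recombine pattern that underpins a grid can be sketched in a few lines of Python. In the sketch below, local worker processes stand in for grid nodes and the ‘piece’ of work is a purely illustrative calculation; a real grid would despatch each piece to a separate machine.

    from multiprocessing import Pool
    import random

    def run_piece(seed):
        # Stand-in for one independent piece of a large calculation,
        # e.g. a block of stochastic projections given to one grid node.
        rng = random.Random(seed)
        return sum(rng.gauss(0.0, 1.0) for _ in range(100_000))

    if __name__ == "__main__":
        pieces = range(64)              # break the problem into 64 pieces
        with Pool() as pool:            # local processes stand in for grid nodes
            partials = pool.map(run_piece, pieces)
        print(sum(partials))            # recombine the partial results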

How big is the cloud?
According to Gartner, “By 2012, 80 percent of Fortune 1000 enterprises will pay for some cloud computing service and 30 percent of them will pay for cloud computing infrastructure. Through 2010, more than 80 percent of enterprise use of cloud computing will be devoted to very large data queries, short-term massively parallel workloads, or IT use by startups with little to no IT infrastructure.”

These estimates were supported by Merrill Lynch, which in May 2008 predicted that the cloud computing market could reach US$160 bn by 2011. This is based on the growth in the number of providers of cloud computing infrastructure and the hardware they are bringing on stream. Amazon has around 30,000 servers and this number is growing steadily. In comparison, Google has over 500,000 servers and could have as many as one million; it is estimated that it installs 100,000 servers per quarter across 36 data centres around the globe. Not to be left behind, Microsoft is reported to be adding 10,000 servers a month to its cloud, and Bill Gates has indicated that it will have "many millions" of servers in a network of data centres.

Accessing the cloud
Conceptually, the cloud is just a vast number of servers upon which one’s application can be deployed. In reality, however, the software stack and solution architecture are critical, along with design, deployment and management. Specific consideration must be given to integration with other systems, including legacy ones, and to the management of user identity, security and single sign-on. Further, a successful cloud solution allows applications to run whenever they are needed – that is, to be elastic – while also being fully monitored and managed. As such, the elements that make up a cloud-based application are not dissimilar to those of a modern enterprise system architecture; the differences lie in how they are put together, managed and sold (see Figure 3).
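
To give a flavour of that elasticity with a present-day example, the Python sketch below requests a block of virtual servers from Amazon EC2 via the boto3 library and releases them once the run is complete. The machine image and instance type are placeholders, and the security, monitoring and error handling a real deployment would need are deliberately omitted.

    import boto3

    ec2 = boto3.client("ec2")

    # Request a block of identical virtual servers for the duration of a run.
    # The image ID and instance type are placeholders, not recommendations.
    response = ec2.run_instances(
        ImageId="ami-xxxxxxxx",
        InstanceType="c5.large",
        MinCount=50,
        MaxCount=50,
    )
    instance_ids = [inst["InstanceId"] for inst in response["Instances"]]

    # ... deploy the application, run the workload, collect the results ...

    # Release the capacity as soon as the run is finished - pay as you go.
    ec2.terminate_instances(InstanceIds=instance_ids)

The point is the shape of the interaction – capacity is acquired, used and released on demand – rather than the particular provider or API.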

As Gartner said, “Virtualised hardware alone is not enough for companies to realise the true value of the cloud. A managed, automated way to architect, deploy, scale and maintain software which supports applications in virtualised environments is needed, along with infrastructure software that enables organisations to actually use the cloud to its fullest potential.”

Deploying an application on the cloud (Figure 3 below)
Leveraging the cloud to meet Solvency II
The computational requirements of Solvency II are driving the need for ever more computing power and data storage, accessible on a scalable basis. Accessing the large reservoir of resources embedded in the cloud offers interesting possibilities and is likely to be a significant component of actuarial systems in the future. However, for actuaries and insurance companies to embrace this, potential concerns over control, auditability, data protection, security, latency, availability and service level agreements will need to be addressed. It is therefore not simply a case of taking an actuarial model and launching it into the cloud.
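
The workload itself suits this model well: a stochastic capital calculation is, in essence, a very large number of independent scenario valuations that are recombined at the end. The Python sketch below shows that shape only; the scenario generator and ‘valuation’ are illustrative stand-ins rather than a real actuarial engine, and local processes stand in for cloud workers.

    from concurrent.futures import ProcessPoolExecutor
    import random

    def value_batch(seed, n_scenarios):
        # Stand-in for valuing the balance sheet under one batch of economic
        # scenarios; a real model would call the actuarial projection engine here.
        rng = random.Random(seed)
        return [rng.lognormvariate(0.0, 0.25) * 100.0 for _ in range(n_scenarios)]

    if __name__ == "__main__":
        n_batches, scenarios_per_batch = 200, 1_000
        with ProcessPoolExecutor() as pool:   # in the cloud, one worker per batch
            futures = [pool.submit(value_batch, seed, scenarios_per_batch)
                       for seed in range(n_batches)]
            results = sorted(v for f in futures for v in f.result())
        percentile_99_5 = results[int(0.995 * len(results))]  # 99.5th percentile
        print(f"{len(results)} scenarios, 99.5% point: {percentile_99_5:.2f}")

In a cloud deployment each batch would be despatched to its own worker, with the distribution assembled once all the batches return.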

Appropriate use of the cloud will be achieved by developing the architecture of new actuarial systems from the ground up and building actuarial models that use the right frameworks and run on the right platforms. These applications will use resources and services in a carefully controlled and structured environment so that this cloud does have a silver lining!


Martin Sher is a Managing Director at SolveXia Pty Limited and Brett McDowall is the Chief Architect at Object Consulting Pty Ltd