Syed Danish Ali looks at the many faces of innovation causing a revolution in healthcare across the globe

The progress of big data has caused an algorithmic revolution in healthcare. While it is difficult to wrap our heads around the myriad innovations taking place - an endless stream of new technologies, algorithms, terminologies and operational developments - a few areas can be taken as 'cases in point' to give us a better insight into the revolution under way in healthcare.
Some of the 'more exotic niches' of big data are:
- Deep learning
- Topological data analysis
- Drones for medicine.
Deep learning
Until recently, the artificial intelligence (AI) element of data science was viewed cautiously, owing to its history of booms and busts. In recent years, however, major improvements have taken place in the field, and deep learning - the new leading front of AI - now presents much promise for overcoming the problems of big data.
Deep learning is a method of machine learning that builds its calculations in a layered fashion: lower layers pick out simple, specific features, while deeper layers combine them into the high-level abstractions needed for vision, language and other AI-related tasks.
The machine learns progressively as it digests more data, and its ability to turn abstract concepts into concrete, usable representations has opened up a wide range of areas where it can be applied.
Deep learning has various architectures, such as deep neural networks, deep belief networks and deep Boltzmann machines, that are able to handle and decode complex structures with multiple non-linear features. It also offers us considerable insight into relatively untapped unstructured data, which makes up 80% of the data we generate, according to IBM.
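To make the idea of layered learning concrete, here is a minimal sketch of a small deep neural network written with the Keras API on an entirely synthetic, hypothetical dataset. The features, labels and layer sizes are assumptions chosen for illustration, not a model from the article: the earlier layers learn simple combinations of the raw inputs, while the deeper layers build more abstract representations before a prediction is made.

```python
# Minimal sketch of a layered (deep) network - illustrative only.
# Assumes a hypothetical tabular dataset X (patient features) and a
# binary outcome y; not a model described in the article.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                       # 1,000 synthetic patients, 20 features
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype("float32")    # synthetic outcome label

model = keras.Sequential([
    layers.Input(shape=(20,)),
    layers.Dense(64, activation="relu"),    # lower layer: simple feature combinations
    layers.Dense(32, activation="relu"),    # deeper layer: more abstract representations
    layers.Dense(1, activation="sigmoid"),  # output: predicted probability of the outcome
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

model.fit(X, y, epochs=5, batch_size=32, verbose=0)   # learn structure in the data
print(model.predict(X[:5]))                            # forecast outcomes for new cases
```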
While traditional data analysis before 2005 focused on just the tip of the iceberg, the big data revolution has since taken hold, and deep learning now offers a better glimpse into the submerged mass of data that we know exists but whose potential has remained largely unrealised.
Deep learning helps us explore the data and identify connections for descriptive analytics, but as the machine learns from the data, those same connections also let us forecast the likely outcome for any given combination of inputs.
Using this method, doctors can for the first time harness the predictive power of deep learning to directly improve patients' medical outcomes. It can handle a broad spectrum of diseases across the entire body, and all imaging modalities (x-rays, CT scans and so on).
Deep learning contextualises the imaging data by comparing it to large datasets of past images, and by analysing ancillary clinical data, including clinical reports and laboratory studies.
In initial benchmarking tests against the publicly available Lung Image Database Consortium dataset, Enlitic, a startup applying deep learning to healthcare, detected lung cancer nodules in chest CT images 50% more accurately than an expert panel of radiologists. In separate benchmarking tests, its deep learning tool regularly detected tiny fractures as small as 0.01% of the total x-ray image. The tool can also support the analysis of hundreds of diseases simultaneously, rather than a single disease or a narrow specialism.
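As an illustration of how deep learning is applied to imaging data, the sketch below builds a small convolutional classifier for 2D image slices. It is a generic, hypothetical example - not Enlitic's system - and the input size and 'nodule present' label are assumptions chosen purely for readability.

```python
# Sketch of a convolutional classifier for 2D image slices - a generic
# illustration, not Enlitic's system. Assumed inputs: 128x128 grayscale
# slices with a binary "nodule present" label.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(16, 3, activation="relu"),   # low-level features: edges, textures
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),   # mid-level features: local shapes
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),   # higher-level features: nodule-like structures
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),     # probability that a nodule is present
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# In practice such a network would be trained on large labelled archives of
# past images, with its outputs reviewed alongside clinical reports and
# laboratory results, as described above.
```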
Topological data analysis
While topology was long confined mainly to pure mathematics and theoretical physics (Stephen Hawking used it to study the structure of black holes in the Penrose-Hawking singularity theorems), it has been adapted by scientists at Stanford University to study healthcare, finance and other fields as well.
Topology is a mathematical discipline that studies shape. Topological data analysis refers to the adaptation of this discipline in analysing highly complex data. It draws on the philosophy that all data has an underlying shape and that shape has meaning.
The machine intelligence approach advocated by Ayasdi, a data analysis company, combines topology with machine learning to achieve data-driven insights instead of hypothesis-driven insights. Machine learning on its own has significant limitations. Clustering, for example, requires the analyst to specify the number of clusters in advance - an essentially arbitrary choice.
With dimensionality reduction techniques, the danger is that subtle insights in the data, which could prove very useful to the analysis, are lost. Combining topology with machine learning effectively overcomes these drawbacks.
The topological visualisations capture subtle insights in the data while also representing its global behaviour. From the nodes of the topological network diagram, clusters are identified and a separate, better-fitting model is fitted to each one, so that instead of a one-size-fits-all model, different models are applied to different regions of the data for maximum predictive power.
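The following is a rough, from-scratch sketch of this pipeline in the spirit of the Mapper construction commonly used in topological data analysis: project the data through a lens, cover the lens with overlapping intervals, cluster locally to form nodes, connect nodes that share points, and then fit a separate model to each node. It is illustrative only - not Ayasdi's proprietary implementation - and the synthetic data, lens choice and parameters are all assumptions.

```python
# Rough Mapper-style sketch followed by per-cluster models - illustrative only.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                                   # synthetic patient features
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + rng.normal(scale=0.1, size=500)  # synthetic outcome

# 1. Lens: project the data to one dimension (here, simply the first feature).
lens = X[:, 0]

# 2. Cover: overlapping intervals over the range of the lens.
n_intervals, overlap = 6, 0.3
lo, hi = lens.min(), lens.max()
width = (hi - lo) / n_intervals

nodes = []                                   # each node = indices of one local cluster
for i in range(n_intervals):
    start = lo + i * width - overlap * width
    end = lo + (i + 1) * width + overlap * width
    idx = np.where((lens >= start) & (lens <= end))[0]
    if len(idx) < 5:
        continue
    # 3. Cluster the points that fall in this interval (DBSCAN: no fixed k).
    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(X[idx])
    for lab in set(labels) - {-1}:           # -1 marks noise points
        nodes.append(idx[labels == lab])

# 4. Edges connect nodes that share points - this graph is the "shape" of the data.
edges = [(a, b) for a in range(len(nodes)) for b in range(a + 1, len(nodes))
         if len(np.intersect1d(nodes[a], nodes[b])) > 0]
print(f"{len(nodes)} nodes, {len(edges)} edges in the topological network")

# 5. Fit a separate model to each node instead of one global, one-size-fits-all model.
local_models = [LinearRegression().fit(X[idx], y[idx]) for idx in nodes]
```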
Topology is applied to both structured and unstructured data for precision medicine.
For example, Ayasdi discovered that type 2 diabetes is not a single disease: it comprises three distinct subgroups, each with its own set of complicating factors. This breakthrough in precision medicine can lead to more effective treatment protocols and better outcomes for all type 2 diabetes patients.
In clinical pathway analysis at Mercy Hospital St Louis in the US, topological visualisation of high-dimensional data on knee replacement surgeries found that a group of medical directors had patients with consistently shorter hospital stays and lower costs. They were using a specific analgesic that was not widely used elsewhere in the hospital.
Drones for medicine
Technology is moving forward at an unequal pace, with rural, developing and lower-income regions losing out in this race. The developing world has the greater need for drones, as a viable means of leapfrogging infrastructural inadequacies, and their use might help narrow the gap between developed and developing worlds.
The first medical application of unmanned aerial vehicles (UAVs) is likely to be disaster relief, where the logistics of distributing blood products is often a bigger problem than supply. The ability of UAVs to travel over closed roads and rugged terrain without risk to a flight crew makes them seem ideal for use in disaster areas. Drones successfully delivered small aid packages in Haiti in 2012, in the aftermath of the 2010 earthquake, and in Papua New Guinea, Doctors Without Borders used them to transport dummy TB test samples from a remote village to the coastal city of Kerema.
Hence, just as telematics is revolutionising data capture and analysis in motor insurance, drones - through their access to rural and far-flung areas and to emergency situations - can generate rich data that could not previously be captured or analysed.
Rural and poor areas, and emergency crises, typically generate little data beyond core facts such as earthquake magnitude or the number of people killed and injured, so drones can generate big data that helps us capture the dynamics of such situations much better.
Although these big data trends can seem all-encompassing, it is useful to keep a few sobering meditations in mind:
- We have to bear in mind the warning of Nobel laureate Ronald Coase: "Torture the data long enough, and it will confess anything." Even small data glitches can cause us to suffer profoundly.
- The experience of all deep datasets is slow. They must wait a long time until they know what has fallen into their depths. Machine learning can lower that waiting time.
- Generally, there is either over-reliance on data and models or negligible reliance on them. We should seek the golden mean that lies between these two vices: our orientation towards data and modelling should sit between the extremes of relying only on opinion and relying only on data and models.
- We can miss the bigger picture if we do not consider the intentions - philosophy, cognitive framing, behavioural bias and so on - behind the building of data, models and expert analysis. We should remember philosopher Friedrich Nietzsche's point that "the decisive value of an action may lie precisely in what is unintentional in it. The intention is only a sign and a symptom, something which still needs interpretation, and furthermore a sign which carries too many meanings and, thus, by itself alone means almost nothing".
Syed Danish Ali is a senior actuarial consultant at SIR Consultants