Adeetya Tantia reflects on increasing concern around data privacy, and whether it may halt the predicted personalisation of insurance pricing
Privacy is one of the foremost concerns in today’s society. We spend much of our day on devices, moving from screen to screen, interacting with content that is quietly compiling a comprehensive profile of us. To ensure user privacy and data security, the law must catch up.
Since Edward Snowden released classified documents about government surveillance programmes in 2013, people have started to pay more attention to how their data is used. Corporate internet surveillance began as a way to tailor ads by analysing users’ behaviour patterns, but has since been enhanced by innovations such as browser cookies, device fingerprinting, social media activity, location tracking and facial recognition. Combined, the data from these techniques yields a comprehensive history of a user’s behaviour and movements. In recent years we have seen how this can be used to target individuals in elections, make incessant robocalls, or push radicalising content via recommendation algorithms. With the advent of scalable machine learning techniques, and the vast amounts of data needed to train models, corporate surveillance will only increase.
Some governments and companies are acting on privacy concerns, the most impactful legislation being the EU’s General Data Protection Regulation. Countries such as Brazil are passing their own data privacy laws and regulations, as are US states – most notably California, with its Consumer Privacy Act. While this is a step in the right direction, more is needed to secure users’ privacy.
Certain companies have started to step up: Apple has introduced well-received privacy features, barring the fiasco over its proposed detection system for child sexual abuse material. Other companies, however, seem to be going the opposite way, betting that the benefits of their services will outweigh privacy concerns – see the watered-down protections in Microsoft’s Windows 10 and 11.
The privacy community has been setting up decentralised versions of websites such as Twitter (Mastodon) and YouTube (PeerTube), and building privacy-focused front-ends for people who cannot leave a platform. Such alternatives exist for many services, but they are less convenient. The value of the Google and Apple ecosystems lies in their ease and comfort – generating insights from emails, calendars and location data, and syncing across devices. Such systems are possible only because these companies have access to all of their users’ data.
However, privacy-focused companies such as ProtonMail and Brave challenge the notion that user data is a requirement for such systems. Their services are still in their infancy and are not as sophisticated as those of established market players, but their growth has proven there is demand for them. The privacy community is also a proponent of free and open-source software: building privacy-focused operating systems, providing software with end-to-end encryption by default, and maintaining public code repositories so that any individual can inspect the code.
Actuarial work relies on large datasets, from which we glean insights and train models. Privacy concerns will, we hope, be relieved by new laws – but those same laws will make gathering such data more difficult. Telematics is one area of concern, harvesting location history, driving behaviour and personal information. Health data is another: fitness watches and trackers gather vast amounts of information that can build comprehensive health profiles and track user behaviour. Internet of Things devices such as thermostats, smoke detectors and security systems, meanwhile, are valuable data sources in property protection. Insurance companies are keen to access these data points because they provide accurate user profiles – and here lies the issue.
Users’ right to have their data deleted is a further area of concern: deletion requests can adversely affect models, and may increase costs if large, compute-intensive models need to be recalibrated as a result.
The potential for data misuse is a strong argument for measures that address these concerns. Such measures could include gaining informed user consent; educating users about the data collected and their rights over it; reassuring users that their data is secure, both in transit and ‘at rest’ on servers; and limiting who within an organisation can access data, and how it may be used.
While insurance needs data to price products and account for claims, privacy concerns disrupt modelling techniques built on vast data pools. Tackling these concerns, while abiding by the law, will require algorithmic oversight and stable, complex systems that give users a say in how their data is used. The age of hyper-specific personalised pricing may never come to pass at all.
Adeetya Tantia is student editor.
Image Credit | Simon Scarsbrook