Chris Seekings and Kate Pearce talk to Carly Kind about the opportunities brought by big data and artificial intelligence – and the challenges of ensuring they benefit all of society

Big data and artificial intelligence (AI) are transforming society, affecting everything from financial services to healthcare. Businesses can access vast pools of data to offer tailored services, cutting costs while improving customer experience.
However, the Edward Snowden revelations, coupled with an increasing sense of unease about the power of big tech companies, have left many fearful of a creeping surveillance state eroding personal freedoms.
Carly Kind, director of the Ada Lovelace Institute, an independent AI research and deliberative body, believes new protections are needed to alleviate those fears and ensure data works in the interests of consumers.
The business model
Many may be unaware that, as they scroll through a phone or tablet, they are actively producing the data that large technology companies profit from. This information provides details on almost every facet of a person’s life – from diet to medical conditions.
YouTube recorded $15bn of advertising revenue last year alone, with its data collection allowing brands to target specific customers. “These providers are sitting on a huge treasure trove of information,” Kind explains. “That incentivises them to collect more data so they can gain even more insights to sell.”
Another fear is that data could fall into the wrong hands, especially sensitive information such as medical records. “One of the critiques of the data collection ecosystem, or as I call it, the business model, is that it creates a surveillance base that can be accessed by other actors,” Kind says. “Data is power, so we need to make sure we have trust in whoever has access to it.”
Systemic challenges
It is almost two years since the introduction of the EU’s General Data Protection Regulation (GDPR), designed to safeguard data by harmonising privacy laws across Europe. Kind admits that, while GDPR moved the conversation forward, it “may not be the law to see us into the middle of the century”, when data and AI will become more central to everything we do. “There’s an argument that, on the one hand, the GDPR hasn’t fundamentally altered the collection of data by commercial entities. On the other hand, overcompliance may have prevented data sharing in important ways for some government departments,” she says. “I think those are big, systemic issues that we need to address.”
Another challenge concerns the Facebook-Google duopoly. These two companies have greater access to data than anyone, and their dominance is set to continue thanks to what Kind calls “lock-in network effects”, where people feel unable to choose another provider. “They won’t leave Facebook because all their friends are on that platform,” she continues. “And they don’t have an incentive to work for you because you’re locked in.”
When two companies dominate the data market, they also dictate the future of AI. “Whoever has access to the most data is going to build the best AI,” Kind explains. “Monopolies are problematic because they impede AI research by centralising data in the hands of so few companies.”
Trade-offs
From an actuarial perspective, there is a good argument for breaking up these monopolies and broadening data access. Big data and AI present significant opportunities in forecasting health, for example, potentially allowing companies to tailor insurance products or predict pension liabilities more accurately. “On the other hand, there is a real question about the extent to which we are moving away from the collectivisation of risk,” Kind says. “That’s going to disadvantage a range of communities, particularly people with health conditions or from vulnerable backgrounds – there is a risk that we push those people out of the insurance market altogether.”
Insurers recognised early on the potential for bias when harnessing data, and in 2001 agreed a moratorium on predictive genetic testing for life insurance in the UK. Kind expects this issue to become live again, and admits that questions around personalised medicine and insurance are hard to answer from an ethical perspective. “I think the balance between personalisation versus values like solidarity and the collectivisation of risk is tough,” she says. “We need to think more about this question as we move AI into more places, respecting individual autonomy and agency, but also thinking about the collective good and how AI can build cohesive societies. The more we remove identifying features from a data set, the better individuals will be protected – but the algorithm might not be as advanced. I don’t think we can have a one-size-fits-all approach, and absolute trade-offs are the wrong way to go.”
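Kind’s point about removing identifying features has a concrete counterpart in anonymisation techniques such as k-anonymity. The sketch below is a minimal, hypothetical Python illustration – the records, field names and generalisation rules are invented, not drawn from the interview – showing how coarsening quasi-identifiers such as age and postcode makes individuals harder to single out, while also erasing the fine-grained detail an algorithm would learn from.

```python
# Hypothetical sketch of generalisation-based anonymisation (k-anonymity style).
# Records, field names and generalisation rules are invented for illustration.

records = [
    {"age": 34, "postcode": "EC1A 1BB", "condition": "asthma"},
    {"age": 36, "postcode": "EC1A 4JH", "condition": "asthma"},
    {"age": 35, "postcode": "EC1A 9QD", "condition": "diabetes"},
]

def generalise(record, level):
    """Coarsen identifying fields; a higher level means more privacy, less detail."""
    band = 10 * level or 1                     # level 1 puts age 34 into "30-39"
    lo = (record["age"] // band) * band
    return {
        "age": f"{lo}-{lo + band - 1}",
        "postcode": record["postcode"][: max(1, 4 - level)],  # keep fewer characters
        "condition": record["condition"],      # the attribute an algorithm would study
    }

def k_anonymity(dataset):
    """Size of the rarest (age, postcode) group: higher k = harder to re-identify."""
    groups = {}
    for r in dataset:
        key = (r["age"], r["postcode"])
        groups[key] = groups.get(key, 0) + 1
    return min(groups.values())

for level in (0, 1, 2):
    anonymised = [generalise(r, level) for r in records]
    print(f"level {level}: k = {k_anonymity(anonymised)}, sample = {anonymised[0]}")
```

Run as-is, the rarest group grows from k = 1 (every record unique) to k = 3 as the generalisation level rises – the protection Kind describes – while exact ages blur into 10- and 20-year bands that a predictive model would find far less informative.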
“You could have a system where information is entirely managed by the individual, allowing them to grant permissions on a granular basis”
Trust building
Kind believes there is a danger that we will throw the baby out with the bathwater, rejecting big data and AI over privacy and governance fears and missing out on the benefits they could provide. “If people don’t trust these systems then I think there will be a backlash and we won’t get to use them to improve things,” she says.
The NHS is one example where trust could become an issue. The institution holds a vast amount of data and enjoys immense public trust, which it relies on to get people to use its services. “We don’t want to do anything that stops people confiding in their doctors, as society will suffer,” Kind explains. Some hospitals are engaging in public-private partnerships – including Moorfields Eye Hospital, which is providing Google with data to help it build products. However, Google owns the intellectual property of those products after five years. “In what circumstances should a company profit on the back of data owned or stewarded by the NHS?” Kind asks.
She thinks there are a range of different ways to build trust into the process, including education campaigns and opt-outs. “It will slow things down, but that doesn’t have to stop you doing the things you want to do.” However, outcomes will never be ideal while so few companies are at the cutting edge of research. “NHS trusts have all the good data, but DeepMind [an AI subsidiary of Alphabet Inc., which also owns Google] has all the good AI scientists.”
Democratising data
Another way to build trust could involve giving people more say in what is done with their data. It is technically possible for the public to pick and choose who has access to their data, and to monitor its use. “There is certainly a way that you could have very granular access permissions,” Kind says. “In a fairytale future, when all NHS data is digitised, standardised and accessible, you could have a system where information is entirely managed by the individual, allowing them to see which bodies have access to their data and grant permissions on a granular basis.”
She suggests that independent data trusts could be responsible for managing requests for data set access. “They could arbitrate whether or not a particular actor has access to that data, oversee how it’s used, and ensure that the benefits flow back to the people in the trust.”
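Neither Kind nor the Institute prescribes an implementation, but the combination of granular permissions and a data trust acting as gatekeeper can be sketched in code. The following Python fragment is purely illustrative – the class names, purposes and approval rule are assumptions, not a description of any real system – and shows the core idea: access is granted per requester, per data set and per purpose, with every decision logged for oversight.

```python
# Hypothetical sketch of granular, purpose-bound permissions managed by a data
# trust. Class names, purposes and the approval rule are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Grant:
    requester: str   # who may access the data, e.g. a named research body
    dataset: str     # which slice of the member's data, e.g. "gp_records"
    purpose: str     # the use the member has consented to
    expires: int     # expiry, here a simple day count

@dataclass
class DataTrust:
    """Arbitrates access requests on behalf of members and logs every decision."""
    grants: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def add_grant(self, grant):
        self.grants.append(grant)          # the member opts in, granularly

    def request(self, requester, dataset, purpose, today):
        allowed = any(
            g.requester == requester and g.dataset == dataset
            and g.purpose == purpose and today < g.expires
            for g in self.grants
        )
        self.audit_log.append((today, requester, dataset, purpose, allowed))
        return allowed                     # decisions are recorded for oversight

trust = DataTrust()
trust.add_grant(Grant("nhs_research", "gp_records", "diabetes_study", expires=365))

print(trust.request("nhs_research", "gp_records", "diabetes_study", today=100))  # True
print(trust.request("ad_broker", "gp_records", "marketing", today=100))          # False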
Allowing the right actors to access people’s data is crucial. Insurance and healthcare are two examples where data and AI could be used for good. “If Google Maps has all this information about how you travel in London, for example, wouldn’t you rather the City of London have that data to help buses and trains run on time?” Kind is a qualified human rights lawyer and believes data can be used to empower people. “We need to think about the public value that’s being created. How do we democratise data to free up access while making sure people have control and rights?”
An unstoppable force
The proliferation of technology devices and the connectivity between them have made it inevitable that much of our personal lives exists in the digital realm.
Greater integration of computing systems, transferring data without human interaction, might sound alarming – but Kind says the increasing pervasiveness of data need not be a cause for concern. “Measuring and collecting data on almost every aspect of your life is going to increase, but that doesn’t have to be a bad thing,” she says. “The ability to keep your data secret is diminishing, but it doesn’t have to result in surveillance and monitoring if individuals are put in control of their data and there are sufficient regulations to ensure it’s used for the public good and with trust.”
Kind says that we need a 20-year vision for data governance sooner rather than later, and that a good data-driven future should allow us to make granular decisions about data access. “A bad 2050 vision is one where there’s data everywhere, lots of it is not good quality, everybody has access to it, and we’re constantly being commercialised,” she says. “As someone interested in rights and justice, I think data and AI represent the biggest opportunity, but also the biggest challenge to those issues.”