Should we privilege openness, or privacy? Respect the common good, or the rights of the individual? And how should companies navigate this space in the world of big data and AI – should we take our proprietary data and make it open access, or should we jealously guard it as the source of our competitive advantage?
I found myself taking an unpopular position last week, when I was honoured to attend a gathering of thoughtful and engaged business leaders at the Finnish Ambassadorial Residence here in London.*
I was there for a seminar on “AI Shaping the Future” and had pushed back on comments by Risto Siilasmaa, Chair of the Board of Nokia, who advised the room that companies need to start “buying today the data that you will need in three years”. I suggested that if you’re buying the knowledge you need then you are baking failure into your business model, and argued that companies need to think hard about creating proprietary datasets in order to survive.
It became quite the thing for every subsequent contributor to begin their speech by saying how much they disagreed with me.
Here’s the issue. You can use public or bought data if you need to get started, but you have to make it richer, starting a closed loop of creation and enhancement within your organisation. To paraphrase Cal Newport, in the knowledge economy, if you don’t create, you won’t succeed.
Mr Siilasmaa’s advice to “start buying today the data you will need in three years” is certainly better than failing to adopt artificial intelligence into your business, and it’s the start of a data science strategy. But it’s just the start. If you’re buying it, then everyone can buy it, including your competitors. So, unless you’re doing something magic with it (and you’re not), it’s not a sustainable route to competitive advantage. Yes, you can do very interesting things with public data using machine learning, and we do that every day with our partners. But the clients who are really carving out a future of competitive advantage are creating.
That’s the point about the AI age – you have to be deploying data science, or machine learning, whatever you want to call it, just to survive. And the projects where we really add value are the ones in which we build very clever models to augment expertise that the company already has. Then we’re taking data, ideally from the client’s own processes, and building a mechanism to add human insights, using machine-human collaboration. And the very best use cases create a circular, closed loop of data creation, where the human insights feed back and make the model more accurate over time.
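That closed loop can be sketched in a few lines. The toy model, the confidence margin, and the review step below are all my own illustrative assumptions, not Silo.AI's actual method: a model routes its least confident predictions to a human expert, and the expert's corrections are fed back as new training data, so the model sharpens over time.

```python
# A minimal sketch of a human-in-the-loop feedback loop.
# Everything here (the 1-D "model", the margin, the expert) is illustrative.

def predict(model, x):
    """Toy threshold model: label 1 if x is at or above the threshold."""
    label = 1 if x >= model["threshold"] else 0
    confidence = abs(x - model["threshold"])  # distance from the boundary
    return label, confidence

def retrain(examples):
    """Refit the threshold as the midpoint between the two classes."""
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return {"threshold": (max(zeros) + min(ones)) / 2}

def closed_loop(model, stream, expert, review_margin=2.5):
    """Route low-confidence predictions to a human expert; their
    corrections are appended to the training set and fed back."""
    training = []
    for x in stream:
        _, conf = predict(model, x)
        if conf < review_margin:              # uncertain: ask the human
            training.append((x, expert(x)))
            if {y for _, y in training} == {0, 1}:
                model = retrain(training)     # human insight updates the model
    return model

# Usage: an expert who knows the true boundary is at 5.0 gradually
# pulls the model's threshold from 3.0 towards it.
model = closed_loop({"threshold": 3.0}, [2.0, 4.0, 5.4, 9.0],
                    expert=lambda x: 1 if x >= 5.0 else 0)
```

The design choice worth noting is that only *uncertain* cases go to the human, which is what makes the loop economical: expert time is spent exactly where the model is weakest.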
To be fair, some excellent points were made in favour of open source, publicly accessible data and data sharing, especially in fields where the common good is at stake, like education, or medical research. We all want a cure for what ails us. But, as the excellent work of people like Latanya Sweeney at Harvard shows, anonymous data is often not as anonymous as it looks to a human eye. You can take the names away, but lots of machine learning is about predicting outcomes based on incomplete data, after all.
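Sweeney's classic result was that a few innocuous attributes in combination (in her work, ZIP code, birth date, and sex) uniquely identify most people, so an "anonymized" table can be linked back to names via any public register sharing those attributes. A minimal sketch of that linkage attack, using entirely invented records:

```python
# Illustrative linkage attack in the spirit of Sweeney's re-identification
# work. All records below are invented; only the join logic is the point.

anonymous_records = [  # names removed, diagnosis kept
    {"zip": "02138", "birth": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth": "1962-01-15", "sex": "M", "diagnosis": "asthma"},
]
public_roll = [        # e.g. a voter register: names alongside the same attributes
    {"name": "J. Doe",   "zip": "02138", "birth": "1945-07-31", "sex": "F"},
    {"name": "A. Smith", "zip": "02144", "birth": "1980-03-02", "sex": "M"},
]

def reidentify(anon, roll, keys=("zip", "birth", "sex")):
    """Link 'anonymous' rows back to names via shared quasi-identifiers."""
    matches = []
    for record in anon:
        hits = [p for p in roll if all(p[k] == record[k] for k in keys)]
        if len(hits) == 1:  # a unique match de-anonymizes the record
            matches.append((hits[0]["name"], record["diagnosis"]))
    return matches

linked = reidentify(anonymous_records, public_roll)
```

Here the unique (zip, birth, sex) combination ties "J. Doe" to her diagnosis even though her name never appeared in the medical table.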
So, are we comfortable with a universal principle of “open data”, if that means turning our backs on individual privacy, or even medical confidentiality? And what about when that public medical archive for research starts having implications for how our health insurance is priced? Maybe, just maybe, open source data would actually lead to less socialisation of common goods, and more personal attribution of costs, because the more we know, the more granular we can be about assigning individual responsibility. That’s a far cry from the ideal of a fairer and more democratic society built on the sharing of information.
And how do we incentivise research, if no one owns the benefits of the findings? This is a problem that we clearly still haven’t solved with regard to Big Pharma and medical patents, and that’s not for want of trying. No one wants a medical system run by a superintelligence with the ethics of Martin Shkreli. And yet we certainly do want researchers who are incentivised, financially or otherwise, to help us develop the next generation of antibiotics.
To be clear, I’m not saying the answer is a world of closed data, every man and woman for themselves, and forget about the common good. I am committed to the ethical core that guides us here at Silo.AI, which is “AI for People”. In fact, you can broaden that to “Technology for People”. That’s why our technical approach favours human-machine collaboration, just as our ethical bias favours projects that produce unambiguous social or environmental benefits (harder than you might think). We have to acknowledge that these issues are extremely complex. Fuzzy, sanctimonious thinking doesn’t often lead to progress in the domains of ethics and logistics on a societal scale.
Get in touch to learn more about how our human-in-the-loop AI solutions might help you.
*(With thanks to our gracious host, Her Excellency Madame Luostarinen, for a fantastic day. The Finnish embassy here in London is doing great work fostering the links between British and Finnish businesses, which are deeper than I ever realised – especially in technology and finance.)