With no product other than their free software, Facebook’s 1.5 billion monthly users have helped the company reach a value of over $350 billion USD. Google, who also give their products away for the low, low price of absolutely nothing, are worth over $500 billion. How can this be? How can two of the world’s largest companies have a combined value of almost a trillion dollars – yet not charge a cent for their products? Because Facebook and Google aren’t free.
It doesn’t take a genius to realise they’re not gifting their products to us out of the altruistic goodness of their hearts. We’re still paying for them. With data, not dollars. Their earnings come from selling (or at least renting out) their users’ personal data to advertisers.
Privacy vs utility
This, understandably, makes some people a little antsy. There’s a reason we lock the bathroom door. We’re generally not very happy having every aspect of our lives recorded just to determine whether we’re more into Coke or Pepsi. But as machine learning takes off, the big data generated by these companies has far more valuable uses than advertising. Everything from diagnosing cancer to artificial intelligence, music composition and self-driving cars stands to benefit.
If we truly want these sorts of technologies to succeed, we’re going to need to give away even more data than we already do. ‘The Internet of Things’ is coming – and with it, thousands of new and wonderful ways to collect our data. Data that has the potential to accelerate the already breakneck speed of innovation. But the question remains: are we actually asking for this hyper-connected, data-mining future, or does it feel more like something the tech world is forcing on us?
There’s an ongoing debate between technology providers and consumers. Privacy vs utility: how much personal information are we happy to trade for better experiences and technologies? Are companies taking too much of our data for the wrong reasons? Or could we be holding back world-changing innovations by not sharing more?
As companies like Amazon, Google and Facebook creep further into our lives, we need to start asking ourselves some serious questions: how much of our data are we willing to share, and to what end?
‘You’re not the customer, you’re the product’
Now that we’re constantly using so many free software tools, the old saying “If you’re not paying for it, you’re not the customer; you’re the product” has garnered biblical popularity. But being labelled a ‘product’ (while semi-accurate) carries some pretty Orwellian connotations.
Facebook and Google offer us free tools, then track how we use them – jotting down every like, search and click to build a user profile. The product isn’t the tools they provide – it’s the data we generate. User profiles are what advertisers (their real customers) pay for.
While this makes it sound as if Brin, Page and Zuckerberg are, in Marx’s words, ‘hideous, pagan idols, who would not drink the nectar but from the skulls of the slain’, their companies’ business models are pretty traditional and straightforward. Their huge success comes simply from using technology to radically improve an old model. Basically, it goes like this:
- Create something that attracts and keeps attention.
- Sell it at production cost or less (or give it away for free) to build up a large audience.
- Sell advertising space on said platform.
Hardly an innovation. Just look at print media. Printing a newspaper has always cost more than its cover price – and that’s before we take journalists’ salaries into account. Newspapers have never made money through physical sales. Instead, they grow readerships large enough to encourage companies to spend thousands to have their ads run alongside the articles. As with Facebook and Google, the real product is our attention. The same goes for television, radio and even many sporting events.
We’re not lambs to the slaughter. We’re not mindless drones to be milked of data and brainwashed into buying more crap we don’t need. Facebook users are no more of a ‘product’ than anyone who reads the morning paper. Technology has merely given us a better experience, and advertisers better results than pre-Internet, spray-and-pray marketing tactics could ever have hoped for. Let’s stop basing our privacy fears on the horrors of consumerism and evil marketing teams.
Worrying incessantly about frighteningly targeted advertising rests on the flawed assumption that the data we provide to these companies will only ever be used for evil. The reality is far, far different.
Your data and medical science
Microsoft researchers recently ran an experiment using machine learning algorithms and the search history of Bing users. Using two data sets – search queries from users who had recently been diagnosed with pancreatic adenocarcinoma, and queries from those who hadn’t – they were able to make some incredible predictions. Based on searches for warning signs such as dark urine, sudden weight loss, nausea and abdominal pain, they found they could correctly identify up to 15% of cases from search history alone. What’s more, the system was so accurate that it produced only one false positive in every 100,000 predictions.
Pancreatic adenocarcinoma is just the beginning. If we all agreed to throw our data into the pot, Bing could seriously ramp up the system – screening against a database of hundreds of different illnesses and notifying you when it thinks a check-up with a doctor is warranted. This is technology with the potential to save millions of lives.
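To get a feel for the idea, here is a deliberately naive sketch. The actual study trained machine learning models on anonymised Bing logs; this toy version just checks a search history against a hypothetical list of warning-sign phrases, with all keywords, thresholds and queries invented for illustration:

```python
# Toy illustration (not Microsoft's actual model): score a search history
# against hypothetical warning-sign phrases for a single condition.
WARNING_SIGNS = {"dark urine", "sudden weight loss", "nausea", "abdominal pain"}

def risk_score(search_history):
    """Fraction of known warning signs that appear somewhere in the queries."""
    queries = [q.lower() for q in search_history]
    hits = sum(any(sign in q for q in queries) for sign in WARNING_SIGNS)
    return hits / len(WARNING_SIGNS)

def should_flag(search_history, threshold=0.5):
    # A high threshold keeps false alarms rare, mirroring the study's
    # emphasis on an extremely low false-positive rate.
    return risk_score(search_history) >= threshold

history = ["dark urine in the morning", "sudden weight loss causes",
           "best pizza near me"]
print(should_flag(history))  # two of four signs matched -> flagged
```

The real system’s challenge is exactly the trade-off this sketch hides: catching as many true cases as possible while keeping false positives vanishingly rare.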
Google, too, have medical science in their sights. They’ve funded new life-science divisions, such as Calico and Verily, and even filed a patent for a smart bathroom. A camera in the mirror could measure skin-colour variations, and an ‘ultrasonic bathtub’ could perform echo tests to detect tumours or abnormal growths. But we scoff at the idea. Who wouldn’t? Google, the world’s largest advertising company, wants to watch us in the shower?
Again, we need to ask ourselves how much privacy we’re willing to sacrifice, and to what end. If your bathroom, your Google searches and your Fitbit could all work in unison to keep a constant eye on your health, how much would you give up?
Big data isn’t just about selling you more crap you don’t need. Nor is it just about making a better phone app. Big data is now about curing cancer. It’s about getting us to Mars. It’s about public safety. Big data promises to make life easier, more fun and less stressful. If any tech movement can help us live longer, it has to be worth taking seriously.
But there is certainly a question of how the data is being stored, and by whom. Even if the data isn’t being used for malicious purposes now, could my search history, private Facebook messages or medical information end up on a server whose security flaws leave it vulnerable to hacking? Which brings us to the question of security.
We worry about privacy when security fails
‘Privacy’ relates to how much of our data companies track, whereas ‘security’ relates to how they store it. While we generally trust businesses not to do anything malicious with the data they collect on us, we need to ensure they protect it from others who might not be so principled.
Security failings often come with severe consequences. Last year, for instance, caches of data were stolen from the servers of a Canadian firm called Avid Life Media – the company behind the particularly unethical dating platform Ashley Madison. Unlike other online dating websites, Ashley Madison’s USP was helping unhappily married people cheat on their partners. “Life is short. Have an affair” was even its slogan. The site quickly attracted millions of users, who happily handed over their details – however incriminating those details would eventually prove – for access to the site.
A group of hackers, evidently perturbed that a company was monetising infidelity, set out to name and shame its participants, and soon the names and email addresses behind 30 million accounts were released to the public. While it might be difficult to sympathise with adulterers, the resulting divorces, resignations and, tragically, suicides were no laughing matter.
Privacy was seemingly an issue to nobody until the curtains flew apart and the world could see inside.
It’s situations like this that should scare us into not revealing too much online. Google stores our location history to provide more relevant search results – but what if that information were leaked? What if my mirror could be hacked, giving industrious voyeurs a live feed of my bathroom? We’ve established that there is good reason for these businesses to track our data, but we need to be able to trust them to keep it under lock and key.
Privacy, security and utility – where do we go from here?
It’s my belief that the benefits of big data are too great to forgo – for consumers, businesses and society as a whole. However, the privacy trade-off becomes a slippery slope when we consider just how sensitive the required data is, combined with haunting reminders of past leaks. If businesses expect us to accept that the cost of admission to their products is more and more of our personal information, they need to double down on security.
The solution to the security problem is not yet clear. Whether the answer is a more robust method of anonymising data without irrevocably damaging its usefulness, more encryption, or harsher punishment for black-hat hackers remains to be seen. We’ll leave that to the boffins.
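One commonly discussed technique along these lines is pseudonymisation: replacing real identifiers with keyed hashes before analysis. This is a minimal sketch, not a complete anonymisation scheme – the record fields are invented, and in practice the key would live in a separate key vault:

```python
import hashlib
import hmac
import os

# Minimal pseudonymisation sketch: replace identifiers with an HMAC.
# Because the secret key is stored separately from the data, leaked
# records can't be trivially tied back to real users, yet the same user
# always maps to the same pseudonym, preserving usefulness for
# aggregate analysis.
SECRET_KEY = os.urandom(32)  # in practice, kept in a separate key vault

def pseudonymise(user_id: str) -> str:
    """Return a stable, non-reversible pseudonym for an identifier."""
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

record = {"user": "alice@example.com", "query": "dark urine causes"}
safe_record = {"user": pseudonymise(record["user"]), "query": record["query"]}
```

Note the caveat hiding in plain sight: the query field itself can still identify someone, which is exactly why anonymising data “without irrevocably damaging its usefulness” remains an open problem.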
Regardless, if we want to reap the benefits of big data, better security should be our primary focus – not building constantly listening electronic ears just to make it easier to order a pizza. Once we’ve nailed that, then – and only then – can we move on to the world-changing medical science.
Until that day, it’s all up to us. You have the option to turn off much of the tracking these companies conduct – or to stop using their products entirely. So for now, just ask yourself: how much privacy are you willing to sacrifice for better technology?