Posts Tagged ‘Privacy’
A Facebook experiment from late 2012 made news earlier this year and raised the ethical question of whether, by using free services provided via the Internet and mobile apps, we have granted informed consent to be experimented on for whatever purposes.
The On the Media TLDR audio podcast recently posted an interview with Christian Rudder, the co-founder of the free dating website OkCupid, who recently blogged about how OkCupid experiments on its users in a post with the intentionally provocative title We Experiment On Human Beings!
While this revelation understandably attracted a lot of attention, at least OkCupid is not trying to hide what it’s doing. Furthermore, as Rudder blogged, “guess what, everybody: if you use the Internet, you’re the subject of hundreds of experiments at any given time, on every site. That’s how websites work.”
During the interview, Rudder made an interesting comparison between what websites like Facebook and OkCupid do and how psychologists and other social scientists have been experimenting on human beings for decades. This point resonated with me since I have read a lot of books that explore how and why humans behave and think the way we do. Just a few examples are Predictably Irrational by Dan Ariely, You Are Not So Smart by David McRaney, Nudge by Richard Thaler and Cass Sunstein, and Thinking, Fast and Slow by Daniel Kahneman.
Most of the insights discussed in these books are based on the results of countless experiments, most of which were performed on college students since they can be paid in pizza, course credit, or beer money. The majority of the time the subjects in these experiments are not fully informed about the nature of the experiment. In fact, many times they are intentionally misinformed in order to not skew the results of the experiment.
Rudder argued the same thing is done to improve websites. So why do we see hallowed halls when we envision the social scientists behind university research, but we see creepy cubicles when we envision the data scientists behind website experimentation? Perhaps we trust academic more than commercial applications of science.
During the interview, Rudder addressed the issue of trust. Users of OkCupid are trusting the service to provide them with good matches and Rudder acknowledged how experimenting on users can seem like a violation of that trust. “However,” Rudder argued, “doing experiments to make sure that what we’re recommending is the best job that we could possibly do is upholding that trust, not violating it.”
It’s easy to argue that the issue of informed consent regarding experimentation on a dating or social networking website is not the same as informed consent regarding government surveillance, such as last year’s PRISM scandal. The latter is less about experimentation and more about data privacy, where often we are our own worst enemy.
But who actually reads the terms and conditions for a website or mobile app? If you do not accept the terms and conditions, you can’t use it, so most of us accept them by default without bothering to read them. Technically, this constitutes informed consent, which is why it may simply be an outdated concept in the information age.
The information age needs enforced accountability (aka privacy through accountability), which is less about informed consent and more about holding service providers accountable for what they do with our data. This includes the data resulting from the experiments they perform on us. Transparency is an essential aspect of that accountability, allowing us to make an informed decision about what websites and mobile apps we want to use.
However, to Rudder’s point, we are fooling ourselves if we think that such transparency would allow us to avoid using the websites and mobile apps that experiment on us. They all do. They have to in order to be worth using.
While the era of Big Data invokes concerns about privacy and surveillance, we still tender our privacy as currency for Internet and mobile services. The geo-location tags, date-time stamps, and other information associated with our phone calls, text messages, emails, and social networking status updates become the digital bread crumbs we scatter along our daily paths. This self-surveillance avails companies and governments with the data needed to track us, target us with personalized advertising, and terrorize us with the thought of always being watched.
Even though it creeps us out when we stop to think about it, we’ve become so accustomed to this new digital normal that it’d be more difficult than we may realize to stop. Robert Hillard recently blogged about his failed attempt to live for one day without creating big data. “Nowhere is the right to anonymity enshrined in the digital age,” Hillard concluded. “The reality is that we leave a big data trail whether we like it or not. While the vast majority of that data is never used, we are not in control.”
In their book Big Data: A Revolution That Will Transform How We Live, Work, and Think, Viktor Mayer-Schonberger and Kenneth Cukier pondered "the specter of permanent memory" conjured by the data we create but don't control: the "risk that one can never escape one's past because the digital records can always be dredged up. Our personal data hovers over us like the Sword of Damocles, threatening to impale us years hence with some private detail or regrettable purchase."
The Sword of Damocles hung above his head by a wire-thin horsehair. The Data of Damocles hangs over our heads by a wire-less web of cloud-enabled mobile services that hovers above us wheresoever we go and which could come crashing down upon us without warning.
“For decades an essential principle of privacy laws around the world,” Mayer-Schonberger and Cukier explained, “has been to put individuals in control by letting them decide whether, how, and by whom their personal information may be processed. In the Internet age, this laudable ideal has often morphed into a formulaic system of notice and consent. In the era of big data, however, when much of data’s value is in secondary uses that may have been unimagined when the data was collected, such a mechanism to ensure privacy is no longer suitable.”
They discussed a regulatory shift from privacy by consent to privacy through accountability, focusing less on individual consent at the time of data collection and more on holding data users accountable for what they do with that data. They also discussed some technical innovations that can help protect privacy in certain instances, such as the concept of differential privacy, which deliberately blurs the data so that a query of a large dataset doesn’t reveal exact results but only approximate ones, thereby making it difficult to associate particular data points with particular people.
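To make the idea concrete, here is a minimal sketch of the Laplace mechanism, the standard technique behind the kind of differential privacy the authors describe. The function name and the epsilon value are my own illustrative choices, not from the book:

```python
import math
import random

def noisy_count(true_count, epsilon=0.5):
    """Differentially private count via the Laplace mechanism.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so adding Laplace noise with scale
    1/epsilon returns only an approximate result, making it hard to tell
    whether any particular individual is in the dataset.
    """
    # Inverse-CDF sampling from the Laplace distribution
    u = random.random() - 0.5
    scale = 1.0 / epsilon  # smaller epsilon = more noise = more privacy
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

Each query of, say, 1,000 matching records would return a value near 1,000 but almost never exactly 1,000, which is precisely the deliberate blurring the book describes.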
“In many fields, from nuclear technology to bioengineering,” Mayer-Schonberger and Cukier concluded, “we first build tools that we discover can harm us and only later set out to devise the safety mechanisms to protect us from those new tools. In this regard, big data takes its place alongside other areas of society that present challenges with no absolute solutions, just ongoing questions about how we order our world. Just as the printing press led to changes in the way society governs itself, so too does big data. It forces us to confront new challenges with new solutions. To ensure that people are protected at the same time as the technology is promoted, we must not let big data develop beyond the reach of human ability to shape the technology.”
I’ve watched over a number of months as major digital providers across handsets, telecommunications, internet services and virtually every other integrated offering have one by one described how they provide information to various governments.
While none of this should be a concern to me as long as I’m doing nothing wrong, I’m left troubled by the vision I painted just ahead of Snowden’s PRISM leak of a world that was as intrusive as that which Orwell painted in his chilling 1984 (Living as far from 1984 as Orwell).
So, I set myself a challenge: could I live for one day leaving a data footprint as light as that of a citizen of the real 1984?
6am. The last thing I did before going to bed was dig out an ancient alarm clock to replace the smartphone I typically wake up to. I know that my phone virtually lives in the cloud and that almost everything it does leaves a trail. To be sure that I silenced the digital hum of that part of my life, I powered it completely down.
6.30am. Clearly I hadn't prepared well enough: on the way to the station, I realised that the smartcard I use for the train is registered in my name. Our train system does let you buy a smartcard without identifying yourself (as long as you use cash and don't top it up online), so I had to allow some extra time. It's a good thing I'm not driving, given that I would have had to take the back streets to avoid using the electronic tag our toll roads use.
7am. On the train, I notice how my fellow commuters are almost all engaged with their smart devices. I probably should have bought a newspaper, something I haven’t needed on a train for years.
7.30am. I grab my usual coffee and breakfast, and hesitate before handing over my loyalty card (a paper relic); increasingly, cafes like my preferred one are joining many others in moving their coffee schemes onto mobile apps.
8am. I decide that using my building security pass is OK. I’m making the rules up as I go, but justify using the pass on the basis that the data belongs to my employer and is no different to the time sheets I’m sure my 1984 predecessor would have filled out.
8.10am. I start up my laptop and email. I'm conscious that I'm leaving a trail within the office network. I think that's within my self-imposed rules, but I'm definitely pushing the boundaries!
8.25am. A bit harder now: I've just noticed an email from a client that requires a response. My 1984 rule definitely won't let me send an email over the public internet, so I revert to making a phone call. The personal touch is appreciated, but it took a bit longer.
9am. I'm in trouble now with my rules: I've just realised that I've left my mobile voicemail on without diverting it to my office. I wonder if this counts as a fail, and what someone could tell electronically from my voicemail usage that they couldn't have known in 1984. The phone hacking scandal that engulfed the media in the UK comes immediately to mind.
9.30am. I head off to speak at a conference and have to take an extra moment to get someone to give me directions – I can't use maps on either my PC or smartphone without leaving a trail. A number of delegates have been tweeting, and I consider whether I have to ask them not to. Admittedly, in 1984 it would have been a matter of public record that I was speaking, but that record would have been far less easily brought together by anyone prying into my movements.
11.30am. I go to read a document by saving the PDF to my Dropbox account and give myself a quick mental slap on the wrist: using Dropbox leaves a digital trail. Worse, I now need to do some HR performance reviews, but I realise that these products are cloud hosted and hence leave a trail across the Internet.
Lunch. I’ve gotten into the habit of walking for 10 minutes listening to music over my streaming service or calling my wife while I go to get lunch. Neither is allowed today as both would leave a dense trail of digital crumbs!
Rather than walk, I decide to have a quick bite in a local café with a colleague. I realise that I have just a few dollars left in cash. Clearly using an ATM wouldn’t be allowed, but I could stop by a bank branch just like my 1984 predecessor.
Apart from not having time (a common 2013 problem) to visit a bank branch, I also wonder whether today’s withdrawal leaves a much bigger digital trail than its 1984 equivalent even when done in person. Instead I pay with my credit card using the argument that they existed in 1984 and the basic process was the same. I’m left wondering whether the fact that the 1984 trail was paper-based is materially different to the electronic data I’ve left behind today.
2pm. Now I've stuffed up. I had thought that I was OK logging onto my office network, but I should have switched off all of the cloud services before they even started. Looks like any prying eyes can work out that I was online – although my lack of activity might make the record sparse.
3pm. I have long-since given up my earlier attempt to avoid sending external email and have responded to client questions and the scheduling of appointments. I conveniently self-justified using an argument based on my own advice on the use of email (Reclaim email as a business tool) but I’m probably on the wrong side of this argument as the email trail is richer than any paper memo or letter.
6pm. I'm heading home and stop by a supermarket to pick up bread and milk, and almost instinctively hand over my loyalty card before stopping myself dead in my tracks. I get a very strange look from the checkout staff member as I ask for my card back before he scans it!
7pm. Paranoia is now really setting in. I’m trying to work out if our cable TV service can tell whether we are switched on and, if so, what we are watching. I decide that as long as I disconnect the Ethernet cable then it’s probably OK.
10pm. Reading in bed; the trouble is, the book I'm reading is on my Kindle. Finally, lights out. At least they had electricity in 1984, but no smart meters!
Ultimately I failed to live for a day without creating Big Data. Jason Bourne I'm not going to be! Even if I had managed it, the absence of a digital footprint is as telling as the presence of one. If suspected of a crime, a citizen who had gone to all of the trouble of being invisible would immediately be a suspect (in fact, this has already happened). Nowhere is the right to anonymity enshrined in the digital age.
The reality is that we leave a Big Data trail whether we like it or not. While the vast majority of that data is never used, we are not in control. I have previously argued that you should own your own data.
Perhaps the ultimate irony is that by publishing this post on the Internet, anyone wondering why I went “dark” for a day will be able to fill in all of the missing pieces.
In my post from this past summer Through a PRISM, Darkly, I blogged about how ours is a world still struggling to come to terms with having more aspects of our everyday lives, both personal and professional, captured as data.
We rarely consider the data privacy implications of our brave new data world. This prompted me to ask why we are so concerned about the government accessing data that, in many instances, we voluntarily gave to companies like Google, whose ostensibly free services (not counting the money we do pay for our mobile phone plans and to our Internet service providers) are not really free because we pay for them with our privacy.
“Google has sucked millions of people into its web by delivering a feature-packed email service that comes only at the price of our privacy,” David Braue recently blogged.
“We must face the unavoidable reality that we have sold our souls for free email. Think about it: We bleat and scream to the hills about the government’s invasions of our privacy, then turn around and mail our personal information using a service specifically designed to harvest that information.”
“Google has positioned Gmail as a gateway drug to a world where everything runs according to Google. Google wants to manage our photos, our social media, our email, our word-processing documents, our everyday tasks, even our general documents.”
“This is the brave new world of the Internet,” Braue argued, “where privacy is an historical footnote and we are tricked or simply bribed to give it up. By and large, we are quite happy to do so. We may not love the need to deliver our personal lives on a platter in exchange for a spam-free, easily-accessible and substantially awesome email experience — but we do so with a smile, over and over again.”
To Braue’s point, no one is forcing us to use Gmail. Many, myself included, use it for the convenience of managing multiple email accounts across multiple mobile devices.
And Google is certainly not our only enemy combatant in what I have previously dubbed the Data Cold War. However, when we trade convenience for privacy, we have to admit the inconvenient truth that Pogo taught us long ago: “We have met the enemy and he is us.”
We don’t give away those slips of paper in our wallets without realizing that’s a form of currency. And we don’t give away the digital currency that is our credit card numbers (e.g., via Twitter, you could use a single tweet to post seven of your credit card numbers, with one space after each 16-digit number, and hashtag it with #MyCreditCardNumbers — but I will assume you would not).
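For what it's worth, the arithmetic behind that hypothetical tweet checks out against Twitter's then 140-character limit. A quick sketch, using obviously fake placeholder digits rather than anything real:

```python
# Hypothetical illustration only: seven fake 16-digit "card numbers",
# each followed by a space, plus the hashtag from the text.
fake_numbers = ["4" + "0" * 15 for _ in range(7)]  # 16 digits each
tweet = "".join(n + " " for n in fake_numbers) + "#MyCreditCardNumbers"
print(len(tweet))  # 7 * 17 + 20 = 139 characters, just under 140
```

Seven numbers with trailing spaces is 119 characters, and the 20-character hashtag brings it to 139 — it really would fit in a single tweet, which is rather the point.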
However, we do give away countless bytes of our personal data in exchange for Internet/mobile-based services that we consider to be free because, unlike the companies providing those services, we do not count personally identifiable information as a form of currency.
The reality is our privacy is currency — and we are giving it away.
“It’s all about bucks, kid. The rest is conversation.”
–Michael Douglas as Gordon Gekko, Wall Street (1987)
Sporting more than 60 million users, Evernote is one of the most popular productivity apps out there these days. You may in fact use the app to store audio notes, video, pics, websites, and perform a whole host of other tasks.
Last month I wrote about the collection of personal information by business and government and compared the loss of privacy to George Orwell’s predictions for 1984 (see “Living as far from 1984 as Orwell”). Barely had I written the post and the news of the PRISM leak hit the news.
Some months ago I predicted in the media that there would be a major breach of trust in 2013 (for an example of the coverage see “Cyber security, cloud top disruptive tech trends”). When I made those comments I wasn’t willing to predict what that event might be, but certainly the controversy around PRISM is causing many people to ask whether their personal activities are being tracked.
I am not buying into the debate on whether the US PRISM program is legitimate or desirable. I do, however, argue that any activity that involves personal information and which comes as a surprise to the owners will naturally risk a backlash. This is as true of individual businesses as it is of governments.
Combined with online fraud and hacking, there are signs that the general public is starting to lose confidence in the technology that they have embraced so enthusiastically.
Although people lose confidence in technology (and the ICT industry as a whole), they still want the convenience of the products that they have learnt to use. Whether it is location services, digital media or social media, people value these additions to their lives. When I originally spoke to the media about the potential loss of trust in 2013 I also predicted that any short-term concerns would be alleviated as the general public turned to brands they trusted.
The role of these brands is not just to stand as a beacon of trust; rather, they have an opportunity to clearly establish the terms of service and provide an extra layer of security and managed privacy. Ultimately these trusted brands can negotiate agreements across the ICT industry and government to use common personally controlled records (as I've written about previously in articles such as "You should own your own data"), putting the control back into the hands of the individual.
Our digital world can add so much to society and the economy. It is up to all of us, as pioneers in this information revolution, to find the solutions that will replicate the protections that were built into the processes of the analogue, paper world as it evolved over more than two hundred years.
By now, most people have heard about the NSA surveillance program known as PRISM that, according to a recent AP story, is about an even bigger data seizure. Without question, big data has both positive and negative aspects, but these recent revelations are casting more light on the darker side of big data, especially its considerable data privacy implications.
A few weeks ago, Robert Hillard blogged about how we are living as far from 1984 as George Orwell, whose dystopian novel 1984, published in 1949, has recently spiked in sales since apparently Big Brother is what many people see when they look through a PRISM, darkly.
Although the Big Data and Big Brother comparison was being made before the PRISM story broke, it’s now more frequently discussed and debated, especially in regard to government, and rightfully so.
However, I still can't help but wonder if we're overreacting to this issue.
Over the last month I’ve been talking a lot about personally controlled records and the ownership of your own information. For more background, see last month’s post and a discussion I took part in on ABC radio.
The strength of the response reinforces to me that this is an area that deserves greater focus. On the one hand, we want business and government to provide us with better services and to effectively protect us from danger. On the other, we don’t want our personal freedoms to be threatened. The question for us to ask is whether we are at risk of giving up our personal freedom and privacy by giving away our personal information.
I couldn't help but think about that most obvious of literary works: George Orwell's 1984. Like many teenagers of my generation, I read the book for the first time in 1984, right at the peak of the Cold War. My overwhelming feeling was one of cultural arrogance: Orwell had gotten it wrong, and the story did not apply to my society, even though it probably was relevant for others.
In 2013 we are nearly as far from the year 1984 as George Orwell was when he wrote the book in 1948. Arguably, as much has happened since 1984 as occurred between 1948 and 1984. The book introduced many interesting ideas, including "telescreens", "thoughtcrime" and "newspeak". While the forces that Orwell wrote about have not been the driver for these concepts to become reality, much of their essence may well have slipped into our society without us noticing.
The ubiquitous telescreen of the book was a frightening device that combined a television with a camera which allowed authorities to watch what you were doing at all times. While the technology has been around since Orwell himself, it really hasn’t been until the rise of the smartphone that constant monitoring has become possible.
While we aren’t being monitored visually, we are increasingly giving away large amounts of personal information in terms of our location. Worse, it is starting to become a suspicious act when we choose to take ourselves off this form of tracking for a period of time. To see how this is playing out in the courts, just look at criminal trials where the defendant is asked to justify why they’ve turned their phone off at the time of a crime taking place.
In the 1984 that we all experienced, freedom of thought was entrenched through institutions such as a free press and free libraries supporting research without fear of surveillance. By 2013, many of these institutions have either moved online entirely or are well on their way to doing so. Far from providing the protection of a library system that ensured complete confidentiality of research topics, any government can see what interests most of its citizens choose to pursue through Wikipedia or any other research tool.
The Orwellian concept of thoughtcrime assumed that unconventional thoughts would leave some sort of hint, and that such thoughts could be a risk to society. It is easy to see how today's governments could come to the same conclusion, using the tools of the internet to identify what they deem to be antisocial interests.
Finally, newspeak was the language that the shadowy rulers of 1984 were creating to dumb-down the population and discourage thoughtcrimes. While it might be a stretch, it is staggering to see how the short-form of modern messaging such as Twitter is encouraging a simplification of our language which is finding its way into the mainstream.
It is easy to write a post that claims conspiracies at every turn. Far from arguing a major government plan to undermine our freedom, I thought that it was interesting to see that many of George Orwell’s fears are coming true. The cause is not an oppressive government but rather an eagerness by the population as a whole to move services onto new platforms without demanding the same level of protection that their previous custodians have provided for a couple of centuries or more.
In virtually every country there is a debate around privacy, driven most recently by the rise of Big Data, social networks and technology more generally. Without doubt, the major technology companies recognise the value of the information they are collecting from individuals and are working very hard to minimise the impact of privacy changes by putting as much control as possible in the hands of the individual.
This debate will only intensify as the social networks push to also own the identity space. Just look at the number of websites that encourage you to use Facebook, Google or similar services to log in. In doing so, yet more of your data is available to these businesses which you are providing in exchange for the service of having your identity conveniently confirmed.
Citizens and consumers are becoming increasingly aware of the value of their personal information and hence are looking for more privacy options. The default position of governments has been to revisit their privacy regulations. This approach, though, is doomed from the start as no regulation can possibly keep up with this rapidly developing information economy.
In fact, it is likely to be economic forces that will provide the solution to the privacy risks of Big Data. Both government and businesses are beginning to realise that they can make individuals much more comfortable sharing their data if, rather than providing an almost infinite (and incomprehensible) set of privacy settings, they simply renounce any attempt at taking ownership of the data and instead borrow or lease it with the permission of the true owner – you.
This approach is referred to as personally controlled records and is made possible by the databases that support the growth of Big Data. No longer is it necessary to extract every piece of information and replicate it many times over in order to support complex analytics. Rather it can be packaged as a neat record, kept in the control of the individual and purged upon expiry or the revocation of the lease that has been provided.
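To make the lease-and-purge idea concrete, here is a toy sketch of a personally controlled record. The class and field names (PersonalRecord, Lease, and so on) are my own invention for illustration, not from any real system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Lease:
    """Permission granted by the data's owner, time-limited and revocable."""
    grantee: str
    expires: datetime
    revoked: bool = False

@dataclass
class PersonalRecord:
    """A neat record that stays under the individual's control."""
    owner: str
    data: dict
    leases: list = field(default_factory=list)

    def grant(self, grantee, days):
        # The owner lends the record for a fixed term rather than giving it away
        lease = Lease(grantee, datetime.now() + timedelta(days=days))
        self.leases.append(lease)
        return lease

    def read(self, grantee):
        """A grantee may read only while holding a live, unrevoked lease."""
        for lease in self.leases:
            if (lease.grantee == grantee and not lease.revoked
                    and lease.expires > datetime.now()):
                return self.data
        return None  # lease expired or revoked: the borrowed record is gone

record = PersonalRecord("alice", {"email": "alice@example.com"})
lease = record.grant("acme-corp", days=30)
assert record.read("acme-corp") is not None
lease.revoked = True  # the true owner withdraws permission
assert record.read("acme-corp") is None
```

The design choice the paragraph describes falls out naturally: because access goes through the lease check rather than a copied extract, revocation or expiry takes effect immediately, with nothing replicated elsewhere to chase down.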
Rather than reducing the value to business and government, this approach actually opens up a huge array of new possibilities and leaves the individual in control. The evidence is growing that when people feel confident that they can withdraw their information at any time and are not at risk of unintended consequences they are much more willing to submit their data for a wide array of purposes.
The protection of the individual in a world of Big Data is not through better privacy, but rather through clarifying who actually owns the data and ensuring that their rights are maintained.
Hear more in a Sky News interview I provided recently on trends in technology.